[ https://issues.apache.org/jira/browse/HADOOP-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HADOOP-4829:
--------------------------------
Attachment: hadoop-4829-v2.txt
Thanks for the review, Tom. Here's an updated version with that change.
> Allow FileSystem shutdown hook to be disabled
> ---------------------------------------------
>
> Key: HADOOP-4829
> URL: https://issues.apache.org/jira/browse/HADOOP-4829
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Affects Versions: 0.18.1
> Reporter: Bryan Duxbury
> Priority: Minor
> Attachments: hadoop-4829-v2.txt, hadoop-4829.txt
>
>
> FileSystem sets a JVM shutdown hook so that it can clean up the FileSystem cache. This
> is great behavior when you are writing a client application, but when you're writing a server
> application, like the Collector or an HBase RegionServer, you need to control the shutdown
> of the application and HDFS much more closely. If you set your own shutdown hook, there's
> no guarantee that your hook will run before the HDFS one, preventing you from taking some
> shutdown actions.
> The current workaround I've used is to snag the FileSystem shutdown hook via Java reflection,
> disable it, and then run it on my own schedule. I'd really appreciate not having to take
> this hacky approach. It seems like the right way to go about this is just to add a method
> to disable the hook directly on FileSystem. That way, server applications can elect to disable
> the automatic cleanup and call FileSystem.closeAll themselves when the time is right.
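> The reflection workaround above can be sketched roughly as follows. This is a minimal
> stand-alone illustration, not Hadoop's actual internals: the `CacheOwner` class is a
> stand-in for FileSystem, and the private field name `clientFinalizer` is an assumption
> about where the hook thread is stored.

```java
import java.lang.reflect.Field;

// Stand-in for FileSystem: registers a private shutdown hook that closes its cache.
// The field name "clientFinalizer" is a hypothetical placeholder.
class CacheOwner {
    private static final Thread clientFinalizer =
            new Thread(() -> System.out.println("cache closed"));
    static {
        Runtime.getRuntime().addShutdownHook(clientFinalizer);
    }
}

public class DisableHook {
    public static void main(String[] args) throws Exception {
        // Reach into the private static field by reflection
        // (forces class initialization on first field access).
        Field f = CacheOwner.class.getDeclaredField("clientFinalizer");
        f.setAccessible(true);
        Thread hook = (Thread) f.get(null);

        // Unregister the hook from the JVM...
        boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
        System.out.println("removed=" + removed);

        // ...and run it on our own schedule instead of at JVM exit.
        hook.run();
    }
}
```

> With a FileSystem.closeAll-style method exposed directly, none of this reflection would
> be needed; the server would simply disable the hook and call closeAll during its own
> shutdown sequence.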
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.