+1

> ________________________________
> From: Stack <[EMAIL PROTECTED]>
> To: HBase Dev List <[EMAIL PROTECTED]>
> Sent: Friday, March 2, 2012 11:24 AM
> Subject: DISCUSS: Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?
>
> Should we make it so hbase 0.96.0 requires at least hadoop 1.0.0?
> This would mean we would no longer support running on older versions
> such as branch-0.20-append (and perhaps stuff like CDH2?)?
>
> Requiring Hadoop 1.0.0 at least means we can presume security and
> append. We also narrow the set of hadoops we need to support,
> simplifying things for ourselves some.
>
> What you lot think?
> St.Ack


I'm wondering why HDFS security support should be mandatory? Append makes sense because there's no way to have a durable system without it. Security is currently an optional feature & implemented as an HBase co-processor (vs core), correct? Is there a problem (other than minor inconvenience) with using introspection APIs for security in the core and then warning if security is enabled but the API is unreachable?

Nicolas

On 3/2/12 3:50 PM, "Ted Yu" <[EMAIL PROTECTED]> wrote:

> Hadoop 0.22 currently doesn't support security.
>
> FYI


On Fri, Mar 2, 2012 at 12:57 PM, Nicolas Spiegelberg <[EMAIL PROTECTED]> wrote:
> I'm wondering why HDFS security support should be mandatory? Append makes
> sense because there's no way to have a durable system without it.
> Security is currently an optional feature & implemented as an HBase
> co-processor (vs core), correct? Is there a problem (other than minor
> inconvenience) with using introspection APIs for security in the core and
> then warning if security is enabled but the API is unreachable?

We could try and do that.

The proposal is about pulling up the bottom end on the hadoops we will run on going forward. If all hadoops from 1.0.0 on have security, and we can depend on that being the case going forward, then we could do things like ship a single artifact rather than the two we currently ship; one that does not depend on a secure hadoop and another that requires it.

I forgot that 0.22 hadoop doesn't have security. Would suggest that we drop support for it too in 0.96 hbase.



On 02.03.2012 21:24, Stack wrote:
> Should we make it so hbase 0.96.0 requires at least hadoop 1.0.0?
> [...]

The UserGroupInformation API is incompatible between secure and non-secure versions *of Hadoop* (among other issues). This leads to two issues:

- Runtime exceptions. We indeed do use reflection to do run time detection of which variant is available.

- Compile time errors. We can't do anything about this. Hence the separate profile.

And just FYI, security has two components: the totally optional coprocessor-based access controller, and the secure RPC engine as a plug-in option. If you don't enable either you won't see any runtime errors; however we can't easily build a single artifact because the secure RPC engine, as it interacts with the Hadoop auth framework, must use UserGroupInformation.

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
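The run-time detection described above can be sketched as a simple class-presence probe. This is only an illustration of the technique, not HBase's actual code; the Hadoop class name is real, but whether it loads depends on what is on your classpath, and this sketch merely degrades gracefully either way:

```java
// Sketch of reflection-based feature detection, in the spirit of how
// HBase probes for secure vs. non-secure Hadoop variants at runtime.
public class FeatureDetect {

    // Return true if the named class can be loaded at runtime.
    public static boolean hasClass(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class name only ever appears as a string, so this compiles
        // and runs with or without Hadoop on the classpath.
        boolean secureCapable =
            hasClass("org.apache.hadoop.security.UserGroupInformation");
        System.out.println(secureCapable
            ? "secure Hadoop APIs present"
            : "secure Hadoop APIs absent; falling back");
    }
}
```

The point of the pattern is that the optional dependency never appears as a compile-time symbol, which is exactly why it can ride over the compile-time errors mentioned above only for code paths that avoid direct references.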


On Tue, Mar 6, 2012 at 9:10 AM, Andrew Purtell <[EMAIL PROTECTED]> wrote:
> ...however we can't easily build a single artifact because the secure RPC
> engine, as it interacts with the Hadoop auth framework, must use
> UserGroupInformation.

OK. So security story needs a bit of work. Sounds like we have enough votes though to require hadoop 1.0.0 at least in 0.96.

After that, I believe we can merge the security sources in. However we may have an issue going forward because UGI is an unstable/private API. Needs sorting out with core at some point.

Best regards,

- Andy

On Mar 6, 2012, at 9:55 AM, Stack <[EMAIL PROTECTED]> wrote:


Andy - could you please start a discussion?

We could, at the very least, mark UGI as LimitedPrivate for HBase and work with you guys to maintain compatibility for the future. Makes sense?

thanks,
Arun

On Mar 6, 2012, at 10:21 AM, Andrew Purtell wrote:


> Andy - could you please start a discussion?
>
> We could, at the very least, mark UGI as LimitedPrivate for HBase and work
> with you guys to maintain compatibility for the future. Makes sense?

That would probably help for internal usage of UGI in the secure RPC engine. As Andy points out, we do already encapsulate UGI in our own org.apache.hadoop.hbase.security.User class (which uses reflection to account for the API incompatibilities) outside of the RPC engine. We do also make direct use of some other Hadoop security classes to implement secure RPC:

If we require Hadoop 1.0.0 then these others should at least be available, though I don't know the API stability of each. If we don't, then the best way towards a single build for release seems continuing towards modularization so that the security classes can be built in a separate jar and included in the classpath when enabled. Handling all of these interactions through reflection does not seem desirable (or sane) to me.

Given that the token/ugi APIs are being used in other ecosystem components too (like Hive, HCatalog & Oozie), and in general, that security model will probably hold for other projects too, I think it's not an unfair expectation from Hadoop that it should maintain compatibility on UGI/Token* interfaces (*smile*).

On Mar 6, 2012, at 11:57 AM, Arun C Murthy wrote:


The current support for multiple versions of HDFS is in my opinion actually one of the strengths of HBase, and the project will lose that advantage if we cut support for earlier versions of Hadoop. I think HBase should only require the simplest possible universally available subset of HDFS API, and security should be an optional feature, discovered through reflection or enabled in some other ways.

We have a custom version of Hadoop at Facebook that is not planning to implement security any time soon. This version of Hadoop runs underneath what we believe to be some of the largest existing production HBase deployments. We are currently running the 0.89-fb version of HBase in production, but are considering moving to a more recent version of HBase at some point, and it would be great to be able to do that independently of changing the underlying Hadoop distribution for migration complexity reasons. Currently we are able to run public HBase trunk on our version of Hadoop, but once in a while we have to satisfy new dependencies on Hadoop features that are added to HBase. If the changes proposed in this thread happen, we would have to pull in a lot more security-related dependencies into our version of Hadoop and, most likely, implement a lot of no-op stubs. However, that may not be a trivial project, and it certainly would not add any clarity or value to our Hadoop codebase or HBase / HDFS interaction.

I imagine there are other custom flavors of Hadoop out there where HBase support would be desirable. For example, does MapR implement the same security API as Hadoop 1.0.0 does? Restricting HBase to a smaller subset of Hadoop versions complicates life for existing users, and makes HBase a less likely choice for new users, who could go with something like Hypertable where they have an extra abstraction layer between the database and the underlying distributed file system implementation.


The security-related APIs should be promoted to public/stable given their increasing adoption.

At least on the HBase side, I'll take the pain once to rework our related sources if the APIs on their way to stability make one more change. However, it would be preferable to avoid further need for hacks. Use of reflection can ride over an API in transition, but it can also punt breakage due to API change to runtime, where we'd least like to see it for the first time.


I have no strong opinion either way: separate profile or merge could be made to work. I'm happy to maintain security related sources as a module as long as the necessary accommodations are made by other devs; e.g. don't break our sources by changing the coprocessor API or RPC without also fixing up the security module or at least making it straightforward for us to do those fixups.


On Wed, Mar 7, 2012 at 11:42 AM, Mikhail Bautin <[EMAIL PROTECTED]> wrote:
> The current support for multiple versions of HDFS is in my opinion actually
> one of the strengths of HBase, and the project will lose that advantage if
> we cut support for earlier versions of Hadoop.

It just gets a little tough to keep up when the span to support is broad: branch-0.20-append up through 0.23.x. I'm not sure if it's tenable keeping it up after we get beyond a certain breadth.

The issue that prompted this discussion was in part "HBASE-5419 FileAlreadyExistsException has moved from mapred to fs package", a helpful patch by Dhruba to get us off a deprecated class. Its application will break our building against hadoops older than 1.0.0 (I believe).

I suppose we can keep up (hacky) reflection but at a certain stage its maintenance becomes "difficult".

> I think HBase should only
> require the simplest possible universally available subset of HDFS API

This notion. I like. How would we ensure we keep to a narrow subset (excepting security for the moment)? If we want to use an exotic hdfs api, we go there via reflection?
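Going to an exotic api via reflection could look like this sketch. It is illustrative only: the method names below are stdlib stand-ins (String.isEmpty, plus a deliberately absent "hflush") for version-dependent HDFS calls, not real HBase code:

```java
import java.lang.reflect.Method;

// Sketch: invoke an optional method if the running library provides it,
// otherwise fall back. Stand-in method names; not actual HBase code.
public class OptionalCall {

    // Invoke obj.methodName() reflectively if present; else return fallback.
    public static Object invokeIfPresent(Object obj, String methodName,
                                         Object fallback) {
        try {
            Method m = obj.getClass().getMethod(methodName);
            return m.invoke(obj);
        } catch (NoSuchMethodException e) {
            return fallback;  // older version: feature not available
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // String.isEmpty() exists, so we get its result; "hflush" does not
        // exist on String, so we get the fallback instead.
        System.out.println(invokeIfPresent("", "isEmpty", null));     // true
        System.out.println(invokeIfPresent("", "hflush", "skipped")); // skipped
    }
}
```

The downside, as noted elsewhere in this thread, is that a rename in the underlying API only surfaces at runtime as the fallback path, not as a compile error.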

> , and security should be an optional feature, discovered through reflection
> or enabled in some other ways.

If we can't assume 1.0.0, and you've made a point that we can't and shouldn't (because we'd be leaving behind our biggest deploy -- which would just be silly), then security is done via the modularization route that has been discussed previously and that has had some work applied (you fellas good w/ that?).

> If the changes proposed in this thread happen, we would have to pull in a
> lot more security-related dependencies into our version of Hadoop and,
> most likely, implement a lot of no-op stubs.

Let's not have you have to do this.

How do you suggest we ensure we minimize you or anyone else having to address '...new dependencies on Hadoop features that are added to HBase'?

> I imagine there are other custom flavors of Hadoop out there where HBase
> support would be desirable. For example, does MapR implement the same
> security API as Hadoop 1.0.0 does?

I don't know. I hoped the lads over there would speak up if this were a suggestion that would mess them up.

The support-all-flavors stance, especially on branches as opposed to releases, requires us to maintain shims for different versions and thus requires us to expend energy managing this complexity instead of improving HBase's core.

I'm not convinced about the new user argument -- if folks are completely new, I'd imagine they'd most likely start by going with the herd and picking a DFS that most folks use (such as an apache hadoop 1.0.0, a cdh version, or possibly a mapr version). In the case of cdh/mapr or an internal custom build it would be the responsibility of the packager to maintain and support their own idiosyncrasies or limitations.

I feel some sympathy towards the existing user argument (we have plenty to deal with) -- a compromise may be to have hbase core tested and focused on a small number of hdfs versions (apache hadoop 1.0.0 and apache hadoop 0.23.x are my first suggestions) and to isolate all the reflection checks that are currently sprinkled throughout the code base into an interface which can be targeted to support other specific HDFS/DFS flavors. This would be saner and could explicitly be tested.

My guess is that this problem isn't just for the user/security API -- I believe there may be performance improvements and api improvements in newer HDFS's that we may want to take advantage of and would need reflection to be discovered as well.


On Thu, Mar 8, 2012 at 12:31 AM, Jonathan Hsieh <[EMAIL PROTECTED]> wrote:
> I feel some sympathy towards the existing user argument (we have plenty to
> deal with) -- a compromise may be to have hbase core tested and focused on
> a small number of hdfs versions (apache hadoop 1.0.0 and apache hadoop
> 0.23.x are my first suggestions) and to have an interface that isolates all
> the reflection checks that are currently sprinkled throughout the code
> base into an interface which can be targeted to support other specific
> HDFS/DFS flavors. This would be saner and could explicitly be tested.

HBASE-5074 introduces HFilesystem, the hbase filesystem. In this new layer, HBASE-5074 does the new checksum facility. It includes faking a call that is in a new hdfs that is not in older versions. Perhaps it's here that we should move all of our reflectioneering so it's contained and grokable?
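A containment layer of this kind might be sketched as follows. All names here are hypothetical illustrations of the pattern (probe once in one facade, expose plain methods to callers), not the actual HBASE-5074 code:

```java
import java.lang.reflect.Method;

// Sketch of an HFileSystem-style facade that keeps all "reflectioneering"
// in one place: the probe runs once in the constructor, and the rest of
// the codebase asks plain questions, never touching reflection directly.
// Class and method names are hypothetical, not the HBASE-5074 code.
public class FsCapabilities {

    private final Method optionalCall;  // null when the API is absent

    public FsCapabilities(Class<?> fsClass, String optionalMethodName) {
        Method m = null;
        try {
            m = fsClass.getMethod(optionalMethodName);
        } catch (NoSuchMethodException e) {
            // Older version of the filesystem API: leave m null and let
            // callers take the fallback path.
        }
        optionalCall = m;
    }

    // Callers ask a plain question; no reflection leaks out of this class.
    public boolean supportsOptionalCall() {
        return optionalCall != null;
    }

    public static void main(String[] args) {
        // Stdlib stand-in: probe String for a method that does exist.
        FsCapabilities caps = new FsCapabilities(String.class, "isEmpty");
        System.out.println("optional API present: "
            + caps.supportsOptionalCall());
    }
}
```

Centralizing the probes like this also makes them testable in isolation, which is hard to do when reflection checks are sprinkled throughout the code base.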

After some more internal discussion, we decided it might not be too hard for us to implement stubs in our version of HDFS to accommodate the new API requirements on the HBase side.

Putting some of the HDFS multi-version support plumbing in HFileSystem sounds like a good idea going forward, though, even if we are removing support for some of the versions.

Thanks,
--Mikhail

On Thu, Mar 8, 2012 at 9:08 AM, Stack <[EMAIL PROTECTED]> wrote:

