Many other reasons
to have that.
I think our final speaker of the day-- so we've talked so far during the day about the packaging, the control, and all the different aspects. And very much the focus, I would say, so far has been on on-premise apps.
So I think the last speaker,
Chris Markle from Aspera--
glad to have you here--
to talk a little about how
some of those things do
actually apply in the
cloud world as well.
So a lot of the basics still
remain the same.
So kind of a little bit insight
into the cloud offering.
Chris.
Take myself out right
off the bat.
Thanks, Prakash.
[INAUDIBLE]
Let me get the operation
straight here.
Great.
So hello, folks.
We talked earlier about maybe
the slot right after lunch
isn't so good or the slot
right before lunch.
But then there's the last slot of the day as well.
So I'll try to be quick.
But I'm like the
other speakers.
I'm happy to take questions
as we go.
So raise your hands.
I'm going to pull out my
timer here and make
sure I stay on time.
But if you have questions,
feel free
to hit me with those.
My name is Chris Markle.
I'm the VP of engineering
services for Aspera.
My responsibilities at Aspera
include, among other things,
working on the company's
business systems when they
come close to the engineering
of the product.
So for example, software
licensing.
I've been responsible for
producing our Legacy Software
Licensing Management System,
which we rolled ourselves in
the old days.
And now I'm working in cloud
metering and cloud billing.
And it's that world that's got
me involved with SafeNet and
some of their products.
And I'm happy to be here today
to speak a little bit about
our implementation.
Here's the agenda for today.
I'm going to tell you a little
bit about who Aspera is, why
we wanted to provide some
cloud based offerings, a
little bit of our history in
the cloud, where we are at
this juncture, and then kind
of what we wanted from
entitlement management
solutions, and how
we went about it.
And we'll tell you a little
bit-- like some of the other
speakers did-- about the
next steps for us in
this area as well.
I have to say I prefer
[? Thorsten's ?]
company's mission statement,
which is something like help
cure cancer.
Like three or four words.
Ours is slightly longer.
But we can help cure slow
data transfers maybe.
But our mission statement is
creating next generation
transport technologies that
move the world's digital
assets at maximum speed,
regardless of file size,
transfer distances, and
network conditions.
So fundamentally, Aspera is a
software vendor who makes high
speed file transfer and data
movement solutions.
Our company is based in
Emeryville, California.
We were started in 2004.
We're 110 employees right now,
privately held and focusing,
as I said before, in the high
speed data movement area.
We created our own protocol for
that that we call FASP.
Once upon a time that stood for
Fast And Secure Protocol.
We have one of those interesting
acronyms where
we've changed what that
stands for without
changing the letters.
But I don't even know what the
new variation of that is.
But it's a UDP-based file transfer protocol that very effectively transmits data at close to the full line speed of your network connection.
And you might say, like I did
when I joined Aspera five
years ago, people pay
for file transfer?
Well, what's wrong with the file transfer solutions that are out there today?
But TCP and other file transfer
solutions have
limitations when you begin to
cover longer distances or you
run over networks that
are more lossy.
And these are getting exposed
more and more as people buy
bigger and bigger
transfer pipes.
So we have customers--
for example, ESPN runs a 10
gigabit fiber link between
Connecticut and LA.
And they pay a lot of
money for that 10
gigabit fiber link.
And they want to use and
fully exploit that 10
gigabit fiber link.
And using software like us, we
can help them derive the full
value of those kind of assets.
We have about 1,600 customers,
12,000 licenses.
We're growing about
50% a year.
And our products are sold--
fundamentally, we're an
enterprise software vendor.
We sell with the direct sales
force, resellers, OEMs,
perpetual licenses,
the classical--
kind of like almost the old
style software vendor.
One thing I'd like to highlight
is our current
software licenses, or our legacy
software licenses, were
typically oriented around a
maximum amount of bandwidth.
Since we felt that our solution
helped you more
effectively use your bandwidth,
we kind of
sell it that way.
So a typical customer might
buy a 45 megabit license.
So it's not user based.
It's kind of server based and
bandwidth based, typically.
And this kind of posed us
problems when we thought about
the cloud because the cloud is
really like a big pile of VMs.
And we didn't want our customers
to be able to sort
of freely run that software on
as many instances in the cloud
as they wanted without
paying us for it.
We actually had, in the old
days, terms and conditions
that said our customers aren't
permitted to run our software
in the cloud.
So the funny thing is somebody asked me today, what about VMs? And I said, even though VMs are kind of like the same as the cloud, we didn't have a similar restriction with respect to VMs. I really don't understand why we felt like VMs were OK and the cloud wasn't.
But anyway fixing that problem
of this bandwidth license and
letting customers be able to run
as much of us as they want
in the cloud was something that
we wanted to work on.
So our biggest customers
are in the media and
entertainment space.
But basically, anybody that has the triple set of attributes-- large amounts of data, geographic dispersal of who they're talking to, like a big ecosystem all over the world, and time criticality-- those are the kind of people that pay for file transfer.
And as I think our company has
proven, it's very possible to
make a business doing something
even when there's
commodity file transfer
solutions like HTTP, secure
copy, FTP, et cetera.
Real quickly about
our protocol.
The protocol, as I mentioned before, is built on top of UDP.
So we don't want to leverage TCP
since it has these issues
that I mentioned earlier of
breaking down a little bit
when there's latency in the
network, or packet loss, or
long distance.
The bars you see in yellow are
the performance of TCP under
increased latency and under
increased packet loss.
And you can kind of see a pretty
significant break down
as latency increases and as
packet loss increases.
The blue graph, the blue bar,
shows the performance of the
FASP protocol in the
same conditions.
So you can see even under large
amounts of packet loss
and lots of delay, our protocol
can still effectively
use the full bandwidth that's
available to us over the
internet connection.
And then we have all the usual
mom and apple pie things like
we're secure, we're manageable,
et cetera.
Over time, the company has built
a series of software
products in a portfolio around
this core protocol.
So we have servers that our
customers can procure to do
file transfers.
We have various clients,
a web client.
We have iOS and Android clients,
et cetera that
implement these protocols
as well.
And then web applications,
systems management,
synchronization, et cetera
all built on
top of the same framework.
So that's a little
bit about Aspera.
OK, so I mentioned earlier we precluded our customers from running in the cloud, mostly, I think, from a standpoint of fear and a concern that we would get abused by a customer running in the cloud.
And the biggest concern
was what if the
customer spins up multiple--
they're running us in one image,
which might be OK.
But then they say, let's just
duplicate that thing.
You can do that so easily
in the cloud.
Next thing you know, they have
50 instances running.
We had a little bit of
rudimentary software license
protection in our product.
But it was home grown.
It was just enough to kind of
satisfy us and keep customers
from making basic mistakes about
running our product.
But it wasn't a real rich,
fully enforcing license
solution that we could
count on to
help us in this situation.
So first of all, we had a base
of infrastructure, of servers,
that people wanted to run us on
that was unavailable to us.
That was kind of our
own dictate.
So we were restricting
our target market.
We wanted to increase it.
We had customers beginning to
ask us about this as well.
The media and entertainment
customers, in particular, were
beginning to use the cloud
mostly from an
economic point of view.
They were finding it cheaper
to acquire and run their
operations in the cloud.
A noteworthy example that I
don't know if you guys are
familiar with or not-- they
run like a ton of their
operation in the cloud--
is Netflix.
And Netflix is a big customer of ours and is using Aspera to do ingress and egress from the cloud.
So that's an example of who have
been going to the cloud
to kind of drive down
their cost.
Also, that bandwidth model-- and maybe we should shut off the tape at this point-- but I have to say, the bandwidth model favors us more than it does our customers.
The selling by bandwidth has
constructed a very nice
economic model for us.
But customers aren't always
happy about having to pay us
for these licenses on
a bandwidth basis.
Imagine you're--
I'm not saying ESPN felt like
this-- but imagine you own
that 10 gigabit pipe.
And you run a client server
application over it.
Why should that run at
a restricted speed?
And we'll say, well, everything
runs at a
restricted speed
unless it's us.
And we're the only ones who can
help you leverage that.
So you have to pay for it.
But people were not always
comfortable with it.
Some people wanted
an alternative.
They wanted a usage
based alternative.
They felt that might be
more fair to them.
So that was another reason.
Then another powerful reason was
our company prides itself
on its technology and its
development prowess.
And whether we deserve that or
not I'm not going to say.
But we work hard on this file
transfer problem and solving
these kind of problems.
And we had come up with a way to permit very high speed access directly from the network into Amazon's object storage, what they call the Simple Storage Service, Amazon S3.
So we wanted to make that
available to customers.
And to do that, we had
to terminate the
solution in the cloud.
So to get into S3 at high speed, we need at least one end to live right next to S3.
So we kind of had a technical
reason to be in
the cloud as well.
A few other maybe less important reasons, but noteworthy as well.
We had customers, especially in media and entertainment, that are very project oriented.
They might spin up an effort
to help dub a movie, or do
sound effects, or something.
These projects are
short lived.
Maybe they're three months.
Maybe they're six months.
The idea of perpetual licenses
was bothersome to these kind
of customers.
They wanted to use
our product.
But buying it forever, at a high speed and a higher price, really didn't make sense for them.
Another one that we haven't done yet, but there are threats to do it, is to support our on-premise customers with this model as well.
So we still sell on premise the
old way we always have,
bandwidth based.
But now that we have the
infrastructure, there's
nothing at all that stops us
from doing the same thing
we're doing in the cloud and
turning on that feature that
I'll show you more about later
for an on premise customer and
having them pay on a usage
basis as well.
Finally, we really wanted to
just kind of get into the
usage based game.
There were some of us that felt like, hey, we really need to be doing this. We need to explore usage-based. We need to get in the game. We need to learn about it. We need to get our toe in the water.
And also, like a number of
people have mentioned, we
thought maybe there was stuff we
could learn from the usage
once we started seeing it.
And it's funny. We have a pretty small number of customers using this at this point. But when I show our CEO the usage, her reaction is really funny.
All she sees are gigabytes
transferred up and down.
But she gets so excited because
I think it brings home
to her sort of what people are
using our product for.
And she gets to kind
of see actual data.
And it really excites her.
So maybe it will go from more
than just exciting the CEO to
helping us make decisions
and whatnot.
But these were some of
the reasons we were
interested in the cloud.
So here's a little
bit of history.
Before 2011, actually late
2009, obviously we
weren't in the cloud.
We restricted our customers
from running in the cloud.
The only people we let use
Aspera in the cloud were OEMs,
or software vendors, maybe like
yourself who might have
cut a special deal with us.
And those people were permitted
to run in the cloud.
In late 2009, we started selling
instances of our
product that were built and
deployed using Amazon DevPay.
I don't know if you guys
are familiar with that.
I'm not even sure I can do
justice to explaining DevPay.
But basically, we would make
instances in Amazon of a Linux
operating system with our
software in it, et cetera.
And we would tie that with an
Amazon program called DevPay,
like [? SignIt ?], or
install DevPay's
facilities kind of in that.
And then customers that ran
that would get our charges
added to their bill.
So that was kind of nice because
we just get paid money
from Amazon.
But we had very restricted
ability to do that charging.
So what we did was we charged
a flat rate to run the
instance per month.
And then we charged an uplift on
Amazon's network I/O count
because Amazon charges
on three dimensions.
They charge on CPU.
They charge on I/O. And they
charge on network I/O.
So we said we'll uplift the
network I/O figuring most of
it is going to be us anyway.
And customers will be able to
buy and pay monthly for that
product that way.
And we had a fair amount of interest in that, and people took that up. And that was generating anywhere from $750 to maybe $3,000 a month from the customers that ran that.
We weren't very happy charging on the network I/O. We wanted to charge on our own metrics. It wasn't bad, but it wasn't what we really wanted.
I have to say we did like the
fact that Amazon had the
billing relationship
with the customer.
We just got paid.
That was really nice.
But we also lacked some
visibility to the
customer, et cetera.
So we're phasing that solution
out as we phase our new
solution in.
Really kind of as an experiment earlier this year, we took one of our offerings, a web product that we have called Faspex, which is like a store-and-forward file transfer product.
So two people can collaborate
using this product.
So for example, Prakash could
send me a package.
And I could download it.
So it's kind of like email
paradigm, et cetera.
We put that in the Amazon
marketplace when Amazon
marketplace first came out.
So you can buy--
again, it's kind of
like buying an
instance of our product.
You can buy that through
the Amazon marketplace.
With all these marketplaces, we
pay a percentage to Amazon
on those transactions.
Again, Amazon has the billing
relationship with the customer
and pays us.
So that's nice.
But we are probably only going
to use this program tactically
moving forward maybe for
certain special kind of
programs or things like that.
We really want to get 100% of
the software revenue and not
give that kind of
cut to Amazon.
I'd say that's the primary
reason for that.
In July of this year, we
released what we call Aspera
on demand version two, which is
our first software offering
in the cloud that
is user based.
And we have made a couple
different products out of that
technology that are all being
sold on a user based model.
So what you buy from us is a fixed amount of transfer. So a typical user might buy server on demand with 500 gigabytes a month included, and then on any overage we might charge you $1 a gigabyte or something like that.
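That plan structure, an included monthly allotment plus per-gigabyte overage, can be sketched like this. The 500 GB and $1/GB figures come from the talk; the base subscription price is a made-up placeholder:

```ruby
# Sketch of the usage-based plan: a fixed monthly transfer allotment is
# included, and anything beyond it is billed per gigabyte.

INCLUDED_GB    = 500      # transfer included in the monthly plan (from the talk)
OVERAGE_PER_GB = 1.00     # $1/GB beyond the included amount (from the talk)
SUBSCRIPTION   = 1000.00  # hypothetical base subscription price

# Invoice total for a month in which `total_gb` gigabytes were moved.
def monthly_invoice(total_gb)
  overage_gb = [total_gb - INCLUDED_GB, 0].max  # never negative
  SUBSCRIPTION + overage_gb * OVERAGE_PER_GB
end
```

A customer who stays under 500 GB pays only the base price; one who moves 650 GB pays for 150 GB of overage on top.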
We're billing now ourselves.
And this is the place where we
started needing like hey, we
got to count usage.
We got to deal with this
usage problem.
And it was the construction of
that product that brought us
into a relationship with SafeNet
and us deciding to use
and employ the SafeNet Sentinel
cloud product.
So I'll talk a little bit more
about where we are at this
exact point in the cloud and
kind of SafeNet implementation
a little bit more.
Any questions so far?
So with our new product we
have built four products.
You see three of them here.
You see a transfer server, which
is our server product
running as an instance
in Amazon.
So you're buying access
to an instance
which you run and control.
So it's not a software
as a service.
I actually like that.
Is this SaaS? Is this IaaS? Is this PaaS? I don't know what this is. It's not SaaS.
But maybe what we're selling you
is our software running in
an instance, which you
then run and operate.
So that sort of feels like
platform as a service.
But some people say things like
Salesforce, and Heroku,
and things like that are
platform as a service.
I don't feel like that.
So I don't know what
as a service it is.
But basically, the customer buys
the ability to run Amazon
instances that run
our software.
So you can buy the server.
You can buy that Faspex
collaboration product that I
mentioned earlier.
And you can buy another
different web app that we
offer for file sharing.
A little bit of a different
paradigm.
All these are usage based.
And all of these support the
ability to use, very
effectively, the Amazon S3
storage on the back end.
So I wanted to just highlight
that real quickly because this
is the technology that we had
that kind of also helped drive
us to the cloud.
So sorry for the rather
complicated picture here.
But basically, if you run an
Aspera server in the cloud,
it's sitting, in effect,
alongside S3.
You can use our transfer
protocol from a remote client,
transfer using the FASP protocol
into that server, and
then do a very high speed
proxying directly into S3.
So we have done some testing of this, for example, against standard S3 transfer clients-- things like CloudBerry-- that might do these parallel transfers to S3 to try to improve performance.
And we've seen something like
an eight to one difference.
So something like a CloudBerry
doing parallel file transfers
over HTTP to S3 can get
maybe 100 megabits.
We have demonstrated up to 700
or 800 megabits into and out
of S3 using this kind of proxy
between FASP and S3 storage.
We're building the same thing
right now for Microsoft Azure
BLOB storage.
And every one of these we do
is very challenging because
it's kind of easy to
get it working.
It's hard to get
it performing.
But this is the technology that
is in all these servers
that lets them access
S3 at high speed.
OK, so we had to
go usage based.
I started thinking about maybe
we should go outside to find
this approach.
Frankly, that was a little
controversial in our company
because we have a real
engineering culture.
I think Rob mentioned, as well,
like I could use my guys
to build this.
And we kind of say that.
But we could have made it.
We actually built a proof of concept-- a Rails app-- because we have a lot of Rails experience.
But I was just thinking wow,
we don't really want to get
into operating a central service
building, this thing
hardening it, et cetera.
Just please, no.
And there's a funny
story there.
Our sales rep at the time
sent me an email.
I was not aware of SafeNet.
I had not talked to SafeNet
or anything.
He sends me an email around
4:00 PM one day.
He was out of Baltimore, so it was one o'clock his time.
I get the email.
The instant I see the email,
I see sentinel cloud.
So clever product naming
with cloud in there.
I literally picked the phone
up and called him back.
And he was sort of like, oh my God, this guy must be really pissed that I sent him this email, because why else would somebody be calling me back that fast?
But I called him back.
And I said, let me ask you two
questions right off the bat.
And Rob, you had concern
about one of these.
One was you've got to
be multi-platform.
You can't be one of these
solutions that only runs on
Windows, or only runs on Linux,
or only runs on Mac.
It's got to be multi-platform.
And please tell me that you
don't require hardware, like a
hardware fob or something
like that.
That'll just make our founders' heads explode.
And he said, yeah, we're
multi-platform.
And no, we don't require
hardware.
And so off we went on
a conversation.
And there were obviously way
more considerations than that.
And I'll talk about
a few of them.
But that first email and phone
call was pretty funny.
But we wanted to be able to license by feature set, which a number of these speakers have talked about, besides counting usage, which is kind of the second item on here.
But we also wanted to define
our products sort of
externally and let the product
ask for its entitlement and
features and then morph itself
dynamically according to
whatever comes back.
And we can take that farther
than we have.
But that was kind of
critical to us.
So we wanted to tap
the features
from an external source.
And we wanted to record usage.
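That "ask for your entitlement and morph yourself" idea might look like the following sketch. The feature names and the fetch call are hypothetical stand-ins; the real product goes through the Sentinel SDK:

```ruby
# Sketch of a product that queries its entitlement at startup and
# enables or disables components based on whatever comes back.
# Feature names and the fetch call are hypothetical stand-ins.

# Stand-in for a call to the central entitlement service. A real
# product would go over the network; we return a canned answer so
# the sketch is self-contained.
def fetch_enabled_features(customer_id)
  { "mobile" => true, "connect_plugin" => true, "faspex" => false }
end

# Build the product's runtime configuration from the returned features,
# defaulting anything absent to disabled.
def configure_product(customer_id)
  features = fetch_enabled_features(customer_id)
  {
    mobile_enabled:  features.fetch("mobile", false),
    connect_enabled: features.fetch("connect_plugin", false),
    faspex_enabled:  features.fetch("faspex", false),
  }
end
```

The point is that the product's shape is decided externally: change the entitlement centrally, and the next startup picks up the new personality without shipping a different build.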
We also wanted the thing
to be centralized.
And we were going to have to
run a solution in the cloud
ourselves or get one from
somebody that ran centrally.
So we had nothing like that.
We didn't even do activation
for our--
we had no central service to
even leverage for this.
We also wanted to be able to
access the system with web
APIs from tooling, from our
sales force, from our
NetSuite, possibly from other
systems that we might get that
would be involved in this.
So we wanted kind of like a web
programming interface or a
decent programming interface to
help snap it together with
other systems.
We, at the same time, were
looking at a subscription
billing solution.
We ended up selecting
Aria for that.
And so we wanted--
that kind of relates to my
earlier comment-- we wanted to
be able to integrate with the
subscription billing system.
And we wanted it to scale.
We didn't want to build the
infrastructure ourselves.
And maybe as a dream, we wanted
to be able to do a
little bit of this building,
letting the product manager,
or people besides engineering,
like construct products out of
components, et cetera.
We're not there.
But we kind of like
that model.
So here's how-- now, in my presentation, you might hear me refer to this as SafeNet, or Sentinel, or SafeNet Sentinel, or SafeNet Sentinel Cloud. But I believe the proper name for what we ended up using is Sentinel Cloud from SafeNet. Any time I use any of those words, that's what I mean.
Here's how their solution
appeared to meet our needs and
so far has.
First of all, we wanted
to be able to
license by feature set.
So first of all, the SafeNet
product supports the ability
to model products that way.
Their natural interface is to
model products as features
with the kind of license model
associated with those.
So very easy to model
by features.
I would say that being kind of
neophytes to complex software
licensing--
I mean we had software
licensing.
We use like a signed XML
license in our product.
But we weren't pros
from Dover when it
comes to software licensing.
It did take us a little while to
kind of like wrap our head
around entitlements, products,
license models, features, et
cetera from this solution.
But we were able to do that.
And clearly, being able to
have products composed of
features and being able to turn
those on and off was very
helpful to our cause.
Obviously, the SDK
supported that.
And I could access those same
things through their EMS
system through its
web interface.
We wanted to be able to license
by usage as well.
It's straightforward to record usage with this product and, again, to access usage through the SDK and the EMS and whatnot. For automation, integration, et cetera, we're leveraging very heavily the REST-like interface to their entitlement management system.
Right now, we're doing a lot of
that through sort of custom
Ruby scripting to tie system
x to y at this juncture.
So we're heavily using the EMS's APIs.
And it's proven very
helpful for us.
Frankly, on scaling,
we trusted SafeNet.
We didn't really test
this very much.
We trusted it.
We were concerned about what
was cached and what wasn't.
And so we looked specifically
at that and kind of
liked what we saw.
But we trusted them to scale.
And the things we're doing to
communicate to SafeNet are
done at sort of leisurely
intervals.
We were doing everything
every five minutes.
So it's not like we're pounding
on the system or
anything like that.
And so far, it's been
scaling fine and
operating very reliably.
But this is something we frankly
just trusted them on.
We weren't even sure how we
would test this, frankly.
We didn't want to build
it ourselves, at
least some of us didn't.
They provide all the parts we
needed to do this, SDKs, the
back end, the EMS system,
et cetera is
built and run by SafeNet.
And we could just
use it and not
have to build it ourselves.
So here are our steps that
we went through.
We did a little bit of research
on solutions.
That was sort of a
funny process.
It kind of went like this, maybe
we'll go look at FlexLM.
And that was the first
card we played.
And then like about five people
at our company raised
their hand and said,
no, no, no.
We've used that before.
Not interested.
So we basically didn't
look at that.
We learned about Sentinel
in the process.
We did consider rolling our own or extending the proof of concept we had done.
And then we also took
a look at--
I'm sorry.
I don't know the name
of the company.
But there was another company
who was out of Texas founded
by the guys that wrote FlexLM.
So you could call it like
FlexLM done over again.
And as a software guy, I'm
always interested in doing it
over again because if you've
written software, usually the
second time is a lot better
than the first time.
So we did talk to them.
But we would have had to operate
their technology in
the cloud ourselves.
They didn't really have
a cloud story.
So SafeNet was unique in having a cloud story.
We were able to try the thing
very quickly and prove to
ourselves that it worked.
And we didn't really do a big
extensive evaluation of very
many other solutions.
As I said, we did a little bit of proof-of-concept testing around the model, like writing some Java apps to just inject usage, and test for features, and that sort of thing.
Again, as I mentioned earlier,
we had a little trouble
wrapping our head around the
model, the entitlements, and
the products, and features, and
all that sort of stuff.
But eventually, we
got through that.
We got some very good support
help from our systems
engineering team at SafeNet.
Then we bought the product. We specifically procured Sentinel Cloud at the beginning of January this year-- so January 2012.
We then underwent an
effort to alter our
products to support SafeNet.
So we had to count usage
where we didn't before.
We had to snap SafeNet
API into our product.
Here we did something that you may or may not do if you were doing this.
We actually implemented another
layer in between our
core product and SafeNet like
as a little web service.
We did that for two reasons.
When we first started with SafeNet, they didn't have a C API. One finally appeared during the process of us buying.
But we decided we were going
to stick with Java.
And so we wrote a little Java special-purpose web service that talks JSON on one side and SafeNet on the other side.
Frankly, we did that so that
just in case stuff happens,
just in case we decided the
SafeNet thing wasn't going to
work, we wouldn't have been
using them like at the core of
our products.
So this abstraction gave
us a little bit of
defense against that.
It's never been an issue.
And it's actually nice having
this abstraction layer now
because it's very easy for us to
diagnose any issues we have
because it has its own
separate logging
and things like that.
So it's been kind of
handy to do that.
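That abstraction layer might look roughly like the following. The real service was written in Java; this Ruby sketch just shows the shape of it, with a hypothetical operation name and a stubbed-out back end standing in for the SafeNet SDK:

```ruby
require "json"

# Sketch of the abstraction layer: a small service that speaks JSON to
# the core product on one side and the licensing back end on the other,
# so the product never calls the vendor SDK directly.

# Stand-in for the vendor SDK call the service wraps.
def backend_record_usage(feature, amount)
  { "status" => "ok", "feature" => feature, "recorded" => amount }
end

# Handle one JSON request from the product and return a JSON reply.
# The "record_usage" operation name is hypothetical.
def handle_request(json_body)
  req = JSON.parse(json_body)
  case req["op"]
  when "record_usage"
    backend_record_usage(req["feature"], req["amount"]).to_json
  else
    { "status" => "error", "message" => "unknown op" }.to_json
  end
end
```

The design benefit described in the talk follows directly: if the back end ever had to change, only the wrapped side of this layer would need rewriting, and the layer's own logging gives a single place to diagnose licensing issues.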
Anyway, we had to do that.
We had to interface the protocol
in our product to
count things.
It already counted a lot of things, but we needed it to count and report in a way that we could feed into this other application.
So that involved an engineer in
Europe that wrote the Java
app and then some people who
worked on our core product
working on that part.
That went pretty quickly,
both parts of that.
But I have to say, we kind of
had fits and starts where we
were working, and then we
weren't working, and working,
and not working.
In parallel, that S3 componentry
that I showed you
earlier was being built.
And we were putting
the licensing
together in that product.
So those two things were
happening simultaneously.
We had to update our build
systems to build the AMIs
automatically that we were
providing to our customers.
I have to say, and I'll make
this comment later, we built
more AMIs than we needed to.
We could have built one and used
the feature acquisition
to have the AMI take on
different personalities
depending on those features.
But to simplify things,
we made multiple AMIs.
So we have four offerings
in the cloud
coming from three AMIs.
We're going to try to push
down the number of AMIs.
Every AMI you make, our QA team
needs to try it and run
through it a little bit.
So it's just overhead.
We built some tools.
Again, our first pass on this was tools built in Ruby to create things like customers, entitlements, and contacts in SafeNet for when we do evaluations and when we do sales.
So we use SafeNet and SafeNet entitlements for evaluations.
A typical evaluation from us, we
make an entitlement for the
customer for 30 days.
We've had a culture of evals, time-based evals, for a long time. So Rob mentioned that beginning with time-based evals was healthy. We had been doing that for the entirety of our company, effectively.
We did some tools to do basic
usage collection.
I liked Rob's presentation
because they tackled the whole
thing at the front.
They tackled Salesforce
integration, financial system
integration, et cetera.
We are doing that now.
But we have deployed SafeNet and
these products even in the
absence of that.
So usage is collected.
And we're using just tooling
that periodically pulls the
usage and just kind of hands
it to finance sort of in a
spreadsheet form.
And then they do invoicing
from that.
We have a low enough
volume that that
works for us currently.
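That interim tooling amounts to pulling usage records and flattening them into a spreadsheet for finance. A minimal sketch, with canned records standing in for the real pull from the Sentinel EMS, and assuming field values contain no commas:

```ruby
# Sketch of the interim billing tooling: periodically pull per-customer
# usage and hand finance a spreadsheet to invoice from. The rows here
# are canned; the real tool pulls them over the EMS REST interface.
# Assumes values contain no commas, so plain join is safe.

def usage_to_csv(usage_rows)
  lines = ["customer,gb_in,gb_out,gb_total"]
  usage_rows.each do |row|
    total = row[:gb_in] + row[:gb_out]
    lines << [row[:customer], row[:gb_in], row[:gb_out], total].join(",")
  end
  lines.join("\n") + "\n"
end
```

Run on a schedule, the output file is exactly the "spreadsheet form" finance invoices from; the inbound and outbound columns add up to the total, matching how the usage features are counted.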
But you'll see later that
integrating to Salesforce,
integrating to Aria, things like
that is getting a lot of
our attention right now.
And then we released.
I think the 1st of June, we had our first customers on like a beta of this.
And then on the 1st of July, we released the Aspera on demand version two AMIs and this functionality to our customers.
I'm not going to belabor this.
But I just wanted to give you
a quick example of the kinds
of ways we're using features
in SafeNet.
We could quibble about the
naming of some of these.
But let me start from the bottom
because that one we
can't quibble about.
The bottom one is the
usage features.
The only usage that we report
to our customers is total.
But we actually count in and out
separately mostly so that
we can show that data to a
customer to convince them that
we're not just making
up numbers.
If we have inbound and outbound numbers that add up to the total, we thought that would be a little optically better for a customer.
But the plans are just
built on totals.
We have a time constraint
related feature.
Entitlements, and products,
and features, and all this
other stuff in SafeNet
all take time span.
So you can control those things
with a start date and
an end date.
But to simplify things, we made
our own feature called
Term, which has a start date
and an end date, either of
which we can manipulate
on the fly.
So if we want to stop somebody
from running the software, we
don't worry about revoking
entitlements, or changing
dates on features,
or this, or that.
We just do it in one place
on this Term feature.
So that's a very commonly
used one in our case.
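The Term idea might be sketched like this, with invented field names: a single feature carries a start and end date, and the product checks it before anything else, so turning a deployment off is one edit in one place.

```python
from datetime import date

# Hypothetical "Term" feature: a single start/end window checked
# before anything else, so stopping a customer is one edit rather
# than revoking entitlements or changing dates on many features.
def term_active(term, today=None):
    today = today or date.today()
    return term["start"] <= today <= term["end"]

# Example term window (invented dates).
term = {"start": date(2012, 7, 1), "end": date(2013, 6, 30)}
```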
Then at the top are the kind of
features that are closely
related to what everybody else
has been talking about, which
are features that control
components or
functions of the product.
So do we support mobile?
Do we support our
Connect plug-in?
Do we support the Faspex app?
These are sort of big, giant
yes, no knobs that control
what the product's going
to do and support.
And then the final one, which is
a little bit of an unusual
one perhaps is we're using some
of these features to pass
things that aren't really like
license items so much as they
are configuration.
So for example, from a legacy
point of view, our licenses
require a customer
number in them.
And this is handy for
our support people:
when two Aspera customers
communicate, we see the
customer number from each one,
and we can figure out
who they are.
So we put the customer number in
SafeNet and feed it to our
product so that it dynamically
becomes
associated with that customer.
Same thing with bandwidth.
Same thing with some other
configuration parameters like
max users for a certain part
of the product, et cetera.
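The configuration-through-features idea could be sketched roughly like this; the feature names (customer_number, max_bandwidth_kbps, max_users) are made up for illustration and are not SafeNet's real schema.

```python
# Hypothetical sketch: some license features carry configuration
# values (customer number, bandwidth cap, max users) rather than
# on/off entitlements. The feature names are invented.
def features_to_config(features):
    config = {}
    for f in features:
        name, value = f["name"], f["value"]
        if name == "customer_number":
            config["customer_number"] = str(value)
        elif name == "max_bandwidth_kbps":
            config["max_bandwidth_kbps"] = int(value)
        elif name == "max_users":
            config["max_users"] = int(value)
    return config
```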
Yes?
Was there a--
OK, sure.
Do you provide a dashboard to
your customers so that they
can take advantage of this data,
this information that
you're getting here?
No, we don't.
The question was do we
provide a dashboard.
We don't currently.
But we know.
We know we need to.
And I don't know if we're
going to give them that
information out of SafeNet, or
we're going to give them that
information out of Aria because
the usage data is
duplicated over in our Aria.
But we know we have
to do that.
Yes, that's a good question.
Real quickly, here's kind of a
diagram of how this looks in
our software.
On the left side is the
componentry running in
SafeNet's world in Amazon.
So there's a Sentinel
cloud run time
running as an Amazon instance.
And then there's the EMS system,
which applies to both
the cloud product and the
on premise product.
And we have the product manager
here somewhere I think.
Hi there.
So for acquisition of features,
we use an API from
SafeNet called Get Info.
It's really as simple as that.
You connect to SafeNet.
You issue Get Info with
an entitlement ID.
And you get back the details
about that entitlement by
feature, their dates,
whether on or off,
et cetera, et cetera.
And you can use that to further
make determinations.
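That acquisition flow could be sketched roughly as follows. The real Get Info call and its response format aren't reproduced here; `get_info` below is a placeholder that returns a canned response with a plausible shape (per-feature dates and an on/off flag).

```python
from datetime import date

# Placeholder for the Get Info call described above: the real
# network request is replaced by a canned response so the sketch
# runs; the response shape (per-feature dates and enabled flags)
# is an assumption, not SafeNet's actual format.
def get_info(entitlement_id):
    return {
        "entitlement_id": entitlement_id,
        "features": [
            {"name": "mobile", "enabled": True,
             "start": date(2012, 7, 1), "end": date(2013, 7, 1)},
            {"name": "faspex", "enabled": False,
             "start": date(2012, 7, 1), "end": date(2013, 7, 1)},
        ],
    }

def enabled_features(entitlement_id, today):
    # Keep only features that are switched on and inside their dates.
    info = get_info(entitlement_id)
    return {f["name"] for f in info["features"]
            if f["enabled"] and f["start"] <= today <= f["end"]}
```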
This component at the top you
see called ALEE, that's the
Java process that we wrote
to kind of like hide
SafeNet from our app.
So we use a simple JSON
interface to talk to that.
We have a component in our
product running all the time
called Node.
And Node lets us put
little microtasks
in there to do work.
And one of the microtasks that's
in there is something
to report every five minutes
usage and every five minutes
query features.
So if we turn a feature on or
off, five minutes later our
customers are able to
use it without any
changes to their product.
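A sketch of that five-minute microtask, assuming a simple timer-based poller; the real Node process and its task framework are not shown.

```python
import threading

# Hypothetical poller standing in for the Node microtask: every
# interval it reports accumulated usage and re-queries features,
# so a feature flipped server-side takes effect within one period.
def start_poller(report_usage, refresh_features, interval=300.0):
    def tick():
        report_usage()
        refresh_features()
        timer = threading.Timer(interval, tick)
        timer.daemon = True   # don't keep the process alive
        timer.start()
    tick()                    # first pass runs immediately
```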
And then our apps, and our
clients, and whatnot, our SDK
users, et cetera access through
a license library
something that looks a lot
like our classic license.
Remember I mentioned earlier we
have a signed XML license
representation.
Basically, the license library
takes the feature information
that comes from SafeNet and
a template sort of blank
generalized license that we ship
with every AMI, which is
the same on every one, merges
them together to form this
kind of legacy artifact
that our product's
kind of used to seeing.
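The merge step might look roughly like this; the element names and the template are invented stand-ins for Aspera's real signed-XML license format, which isn't shown here.

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the blank template license shipped with
# every AMI; the real signed-XML format is not reproduced here.
TEMPLATE = "<license><customer/><features/></license>"

def merge_license(template_xml, customer_number, features):
    # Fill the generic template with the values fetched from the
    # entitlement service, producing the legacy-shaped artifact.
    root = ET.fromstring(template_xml)
    root.find("customer").text = customer_number
    feats = root.find("features")
    for name, value in features.items():
        ET.SubElement(feats, "feature", name=name, value=str(value))
    return ET.tostring(root, encoding="unicode")
```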
Usage happens kind of in
a reverse direction.
The apps and clients write their
usage to a key value
data store, and that's done
in a semi-secure way.
And every five minutes, the node
process reports that to
ALEE, which uses the two API
calls log in and log out and
the SDK to report the usage.
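The reporting direction could be sketched like this, with the store and session objects standing in for the key value store and the SDK's login/logout calls; none of the names here are the real API.

```python
# Hypothetical reporting step: drain usage from a local key value
# store and hand it to the SDK as a login/logout pair. The method
# names on `session` are placeholders, not the real SDK calls.
def report_usage(store, session):
    pending = store.pop("bytes_transferred", 0)
    if pending == 0:
        return 0
    session.login()         # open a metering session
    session.count(pending)  # attribute the drained usage to it
    session.logout()        # close the session, committing the count
    return pending
```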
OK, the good, the bad,
challenges, issues, problems,
et cetera, and a little bit
about what we learned.
So on the good side, the product
really did what it was
supposed to.
It has been very reliable.
Haven't really had any sort of
operational issues with this
product at all.
As Rob pointed out, dealing
with SafeNet--
and I think Dave from NetApp
kind of pointed out they were
a friendly company
to deal with.
I kind of found dealing with
SafeNet to be very simple.
I got great SE support,
in particular.
And the documentation was very
strong, especially from a
reference point of view.
I've pointed out to product
management I might like to see
a little bit more like
best practices and
whatnot from the doc.
But really good reference doc.
They keep it updated.
And that's been solid.
Some of the issues
that we had.
There wasn't a C API.
There is now.
And it didn't even
end up mattering
because we used Java.
So that's almost not fair
to put that as a ding.
What I call calibration
of tech support.
By this I mean--
I'm a software vendor.
I call tech support from another
software vendor.
Now, maybe you guys
are different and
just scream at them.
But I like, as a software
vendor, to kind of play nicely
with other software vendors
support organizations.
So we had a little bit of
dancing around about how fast
am I going to hear
back from you?
Can I count on you really
telling this to engineering?
There's a little
bit of learning
slash calibration there.
And we got to learn how we
worked and how they worked.
And once that calibration was
done, I've been very satisfied
with support.
Internationalization.
I think some of this is coming
in the 2.2 release that I just
got on my sandbox.
But there were some sort of
wildly goofy restrictions on
things like company names.
We got a lot of company names
that have umlauts and crazy
characters in them, et cetera.
So I was kind of stunned at some
of these restrictions.
But they're working
to improve those.
So that was a little
problematic.
I actually assumed from this
documentation that I got that
I could do full provisioning
through the EMS API.
And when I actually tried to
provision one of my particular
modeled entitlements, which
had optional features, et
cetera, I was unable to
do certain things.
So that required a conversation
with engineering.
But that's just become
available for me.
So like every challenge, they
have their other side, too,
which is it's an opportunity.
And SafeNet responded
well to that one.
And then there are some strange
limits on some fields.
Like I don't understand
fields that have
limits like 10 million.
I mean, I'm a programmer.
So I'm used to fields having
limits like 2 to the 31,
31 bits,
or things like that.
So anyway, there are some
strange limits that every so
often I'd bump into.
And I just ran into one the
other day that my maximum
bandwidth is 10 million.
So I said, what if my
maximum bandwidth is
10 million and one.
And that didn't work.
So we're going to have a
conversation about that.
But there were some
odd limits.
Some of our own pain and kind
of what we learned.
Not all of our apps are
leveraging SafeNet yet for
this feature determination.
So we still have to ship a
couple of legacy XML licenses
in our product for some of our
other products that we put
together on the same bundle.
So we need more of our products
to leverage SafeNet.
And we're doing that.
I mentioned this before,
we built more AMIs
than we should have.
We should have used the feature
determination to build
like one AMI that kind of morphs
itself depending on the
features that we get.
And I think we did that
for expediency.
We had a little bit of issues
with the number of moving
parts we have.
We have the node.
We have the ALEE.
We have the SafeNet
environment.
And when we were stopping
and starting
things, we had some problems.
We had to go back and do a
little bit of re-engineering
about the stop, start process
to make that work.
Should have paid more attention
to that up front.
And I mentioned this as well.
I looked at all this doc and
roughly assumed I could make
the entitlement the
way I needed to.
And then when push came
to shove, I couldn't.
And we had to get some help
in a new version to
be able to do that.
Last slide.
So where are we going next?
I admired Rob's presentation
because he's already knocked
off this first one, which
is integration to
his back end systems.
And he also mentioned he used
SafeNet's consulting services,
which I didn't.
So maybe those are related.
And I think I would almost put
another lesson on there--
I think engaging with consulting
might have helped
me in a few areas.
Understanding a model better,
maybe understanding how we
might interface with Salesforce
better, et cetera.
I might reconsider that.
So more of our components to
support the dynamic features
from SafeNet.
I mentioned we still had
apps not doing this.
So we need to deal with that.
And we are.
More cloud providers.
Right now, we run only in
Amazon, which happens to be
where SafeNet runs.
But we're going to support
Azure as well.
We're working on
that actively.
And we had a little wrinkle
thrown at us recently where
we're doing that kind of
closely with Microsoft.
And they said, if you're going
to use our marketplace to buy,
you need to use our metering
service to meter.
It was kind of ugh, oh well.
Whatever you say.
And then, they had a sudden
change of heart about their
metering services.
So just last week, we decided
we're switching our metering
in the Azure offering over
to SafeNet as well
because it's simple.
We know how to do it.
And SafeNet hasn't yet told us
that that metering service is
going away.
So the Azure product's going
to be a little different.
The Amazon product--
recall that a customer ended up
buying like an instance of
our product.
In the Azure case, it's
going to be more of a
software as a service.
So I'm not going to be confused
about what that is.
The customer is going to come
in as a subscriber and buy a
certain amount of transfer
capability.
And we're actually going to
run the infrastructure.
So we're moving to more of an
entitlement for individual
customers to do transfers in a
SaaS capability as opposed
to buying an instance.
And that's going
to be exciting.
I think that's where we really
need to be going.
So the fourth item
relates to that.
We're going to be able to meter
not just the whole of
our product, but down at the
individual user level.
And finally, we're going to put
a web-based storefront.
The marketing organization is
starting to do some user
interface designs for a
web-based storefront so that
customers can buy this stuff
from us directly.
And we'll hook all that stuff
up together the way Rob's
organization has.
You can learn more about
our cloud stuff at
cloud.asperasoft.com.
And there's my contact
information, hopefully.
Did it make it?
No, it didn't make it.
This used to be a Keynote
presentation and is now a
PowerPoint presentation.
So when you saw some upper and
lower case finagling going on,
that was happening in the
conversion process.
But anybody that wants to
contact me, I'll provide you
with a card or my
email address.
And I'm happy to take questions
now, or in our panel
session, or offline,
or over email.

Chris Markle from Aspera presents "Supplementing Your Core Business With a Services-Based Product Offering" at LicensingLive! 2012 in Cupertino. Chris is Aspera's VP of Engineering Services.
