[Editor: I suppose it was inevitable that cloud computing would drive the need for queueing and ESB-type services in the cloud. This has interesting implications for the transition of current distributed architectures entirely to the cloud.]
cloudMQ is an easy way to start exploring the integration of messaging into applications, since no installation or configuration is necessary.
If you are looking for:
• Cross-platform integration for your enterprise
• On-demand, real-time Business-to-Business information exchange
• Real-time Business Intelligence
• Complex Event Processing
cloudMQ provides these benefits and more…
Performance
cloudMQ has the capacity to hold a virtually unlimited number of messages and support thousands of clients.
Unlike Amazon’s SQS service, cloudMQ provides the full set of enterprise messaging features, such as message-order preservation, single-phase and two-phase transactions, and unlimited message sizes.
Reliability
Using the Amazon EC2 compute cloud and S3 storage, we have created a state-of-the-art AMQP messaging backbone that spans thousands of messaging instances.

It's a good idea so long as you are happy with Amazon managing all the infrastructure and comfortable, security-wise, about the data being up there. I think we'll see more and more of this type of managed middleware service hosted in EC2 or similar cloud farms. Latency should be <200ms, maybe. You won't see front office trading systems there, but everything else should be fine, and I'd imagine throughput for non-single-threaded applications should be fine also.

Latency should be <200ms, maybe. You won't see front office trading systems there, but everything else should be fine, and I'd imagine throughput for non-single-threaded applications should be fine also.
Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common).
That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch.
Cloud computing has some very big advantages, but I think people are not thinking critically about some of this stuff. It makes sense to have a queuing infrastructure in the cloud for cloud-hosted services and for distributed delivery across the internet. But people, come on, there's a huge disadvantage to the cloud: latency, and until someone comes up with a way to eliminate it (i.e., send messages faster than the speed of light), that isn't going away.
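For what it's worth, the arithmetic above is easy to parameterize. A throwaway sketch (Python; the 100 ms figure and record count are the illustrative numbers from the comment, not measurements):

```python
# Back-of-the-envelope cost of synchronous, per-record sends to a remote queue.
# Assumes each trigger blocks for one full round trip before the batch continues.

def added_batch_time_s(records: int, round_trip_latency_s: float) -> float:
    """Extra wall-clock time added to the batch by per-record round trips."""
    return records * round_trip_latency_s

seconds = added_batch_time_s(100_000, 0.1)
print(f"{seconds:.0f} s ~= {seconds / 60:.0f} min")  # ~10000 s, i.e. about 167 min
```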

While in some instances batch might be needed, in most it is not. It exists because people don't realize it is 2009.
I have some use cases in mind where batch could go away if we could use something like cloudMQ.

While in some instances batch might be needed, in most it is not. It exists because people don't realize it is 2009.

I have some use cases in mind where batch could go away if we could use something like cloudMQ.

You can rationalize it all you want; the latency is still there. Perhaps you can direct me to your favorite magic-wand retailer. I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'. My job would be so much easier if imagining something were the same as it being real.
http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing


The ability of any messaging framework to fire messages quickly will be crippled if the network is slow, because the overall speed of the system is determined by the weakest link in the architecture.
A latency of 100ms is incredibly high and would be unacceptable for anything other than a toy application.

First, I wasn't saying it should be used for everything. But it could be useful for some things. Your example of batch was ... well, having batch, period, is a problem in and of itself.

You can rationalize it all you want; the latency is still there

I wasn't. But is holding ALL transactions up till EOD or EOM, because you can only send things via FTP, better? Even with the latency, it might be overall faster than "hold and send a pile". And the data in the pile is NEVER bad. :) The sooner I get some data, the sooner I can tell the sender that it is crap.

Perhaps you can direct me to your favorite magic-wand retailer

My wife has a magic wand. She tells me what to do and hits me with it and TA-DA I do it. I will ask her where she got it.
We used to have someone here who had a crystal ball. :)

I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'.

I wasn't. But is holding ALL transactions up till EOD or EOM, because you can only send things via FTP, better? Even with the latency, it might be overall faster than "hold and send a pile". And the data in the pile is NEVER bad. :) The sooner I get some data, the sooner I can tell the sender that it is crap.

You don't have to convince me, but if my answer to batching is a 100-millisecond latency per event, it's going to be hard to convince people that it's an improvement.

I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'.

It doesn't make sense to rewrite a bunch of stuff that works just fine in order to make it possible to use an immature and unproven approach no matter how fabulously hypely-advertastic it is. Remember when EJBs were going to make us all shit golden eggs?


Valid points.
The areas where I am thinking I could use this stuff are places where they won't understand what you just said anyway. :) People who, when I ask them to send me XML instead of comma-delimited files, send me 1,,1,1,1 LOL. (This did happen.)

Let's suppose we have an e-commerce application that is using this to process online orders. Click order, and then it uses this or something like Amazon SQS to hold the order contents for another system to pick up and process. 100ms isn't so bad here, is it? Let's not paint the world a single color because latency is bad for batch. It's all about the use case. If latency were all that mattered, this whole web thing would never have taken off...


I'm not arguing that this can't be useful. What doesn't make sense is to consider this a complete replacement for local (as in LAN-based) queuing.
The example I gave for batch processing is not the only one that has issues with latency, it's just easy to explain.
And moreover, latency isn't the only issue. If you need guaranteed delivery, what happens if the cloud goes down? Or what happens if you are unable to reach the cloud for any reason along the way? Your entire enterprise will grind to a halt. Contrast that with something like MQSeries (a.k.a. WebSphere MQ), where you can have local queue managers that hold onto a message until the destination can receive it.
The cloud is undoubtedly going to change the world of computing but it's not going to make local computing obsolete. More realistically, I think we'll see a world where programs can run locally and on the cloud and move around from local systems to different clouds and back seamlessly. If you are putting all your eggs in Amazon's basket and not considering contingencies and mitigating the risks of vendor lock-in, you are just repeating the mistakes of the past. Of course, no one seems to have any accountability in IT so being ignorant of the past often is rewarded.

... people, come on, there's a huge disadvantage to the cloud: latency and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.

Come on indeed. Latency is a problem if you need single round-trip request-response in under x milliseconds, but in the vast, I mean VAST, majority of cases these are parallelized such that 1000 requests are sent in one second and 1000 responses are received 100ms (respectively) later. Total throughput is the same as it would be with the same-sized "pipe" sitting on the same subnet.
Latency delays serialised, non-parallelizable or dependent messages, and they are very rare in real life.
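The pipelining claim can be demonstrated with a toy simulation. This is a sketch, not a benchmark: `time.sleep` stands in for an assumed 100 ms network round trip, and the worker count is arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor

ROUND_TRIP_S = 0.1  # assumed 100 ms latency per request

def send(msg):
    time.sleep(ROUND_TRIP_S)  # stand-in for one network round trip
    return msg

start = time.time()
with ThreadPoolExecutor(max_workers=100) as pool:  # 100 requests in flight
    results = list(pool.map(send, range(1000)))
elapsed = time.time() - start

# Serially this would take 1000 * 0.1 s = 100 s; with 100 in flight it is
# roughly (1000 / 100) * 0.1 s, i.e. about a second, so latency barely
# dents throughput when requests are independent.
print(f"{len(results)} messages in {elapsed:.1f} s")
```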
I'm currently using EC2+EBS+S3 to store and process several tens of terabytes of exchange data (i.e. stock exchange) in "pseudo" real time; we're only hundreds of milliseconds behind the co-located machines on the exchange and far more up to date and accurate than the classic financial feed services. We can then simulate, test or replay data from most of the major exchanges from any period in the past few months.
FIX 5 over AMQP is a serious option for re-distributing the data so I will be taking a look at this.
-John-
Incept5

... people, come on, there's a huge disadvantage to the cloud: latency and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.

Come on indeed. Latency is a problem if you need single round-trip request-response in under x milliseconds, but in the vast, I mean VAST, majority of cases these are parallelized such that 1000 requests are sent in one second and 1000 responses are received 100ms (respectively) later. Total throughput is the same as it would be with the same-sized "pipe" sitting on the same subnet.

Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is should you do it just so you can be on the cloud bandwagon?

Latency delays serialised non-parallelizable or dependent messages and they are very rare in real life.

So the latency could be anything, say 10 seconds. An hour! It doesn't matter right?

Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is should you do it just so you can be on the cloud bandwagon?

"Clouds" are here to stay and they're going to play a big part in the future of IT. If you're still in COBOL land then it doesn't look like you're into keeping up with technology, so why bother with clouds? Just wait for the next thing and slate that too.

So the latency could be anything, say 10 seconds. An hour! It doesn't matter right?

The post (snail-mail) takes days, they still manage to send millions a day though.
-John-

Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is should you do it just so you can be on the cloud bandwagon?

"Clouds" are here to stay and they're going to play a big part in the future of IT. If you're still in COBOL land then it doesn't look like you're into keeping up with technology, so why bother with clouds? Just wait for the next thing and slate that too.

I have no problem with clouds and I'm not in COBOL land, I'm building the bridge out of COBOL land. What I have a problem with is people trying to push a technology by glossing over the downsides.
I think clouds will become part of the future of IT. I don't think they are the future of IT. If you'd actually read my posts in this thread, you'd already know that. In the real world (as in not the financial industry) companies have to work hard to earn money and they can't just rewrite everything because they see a shiny new toy.

So the latency could be anything, say 10 seconds. An hour! It doesn't matter right?

The post (snail-mail) takes days, they still manage to send millions a day though.

I think clouds will become part of the future of IT. I don't think they are the future of IT.

I totally agree on that.

In the real world (as in not the financial industry) companies have to work hard to earn money and they can't just rewrite everything because they see a shiny new toy.

Again, agreed, but I'm sure you agree someone has to innovate and speculate. The financial services industry often does that, and the "real world" then benefits from the good bits and rarely has to suffer the mistakes. If cloud computing doesn't have a silver lining then you won't have to bother with it; we (the financial services industry) will take the risk.

Why are we wasting our time with these silly computers?

It saves having to find a pen, paper, envelope, stamp and postbox, it works for me :-)
-John-

Again, agreed, but I'm sure you agree someone has to innovate and speculate. The financial services industry often does that, and the "real world" then benefits from the good bits and rarely has to suffer the mistakes. If cloud computing doesn't have a silver lining then you won't have to bother with it; we (the financial services industry) will take the risk.

First, sorry for the attack. I was having a really bad night.
Anyway, when it comes down to it, we really aren't far off from each other.
What I don't understand is why my pointing out that there's an inherent latency in cloud based transactions is treated as some sort of partisan stance against the cloud in general.
My company is suffering pretty significantly from getting on the SaaS bandwagon without considering all the implications. While the cloud isn't the same thing as SaaS, it's got a lot of the same problems. When we went with this vendor, everyone (especially the vendor) kept telling me that it was going to be the best thing ever. The reality is that it's one of the worst things we've ever done. I know everyone laughs at COBOL, and I do too sometimes, but the reality is that a lot of COBOL applications work well and the SaaS shit that we bought doesn't, at least not at the level of quality we require. The world runs on COBOL. The cloud is a blip on the radar, an upstart. I think the cloud will be an important part of IT in the future; I know COBOL will be. Not because we want it to be, but because it's not going away.
All I am really trying to say is that you should consider how this added latency will affect your systems. You might decide to go with it, understanding that you may have to design things a little differently, e.g. add more parallel processing, and therefore do a lot more testing.

There are several flaws in this reasoning. Message size will determine latency. Currently, it takes 25 milliseconds on average to place a 1K message and 45 milliseconds for a 100K message; this is from my laptop at home via FiOS to EC2. So let's take that 167 minutes and say it would be 80 minutes. What is 80 minutes? What do we compare it with? 80 minutes is the time it took to write all those messages to the queue. Assuming that reading messages will take slightly longer than writing, because you will be performing some additional function to process the message, queue buffer space will be utilized more than 90% of the time. You will have a backlog, at least according to queuing theory. Here is a demo.
http://www.dcs.ed.ac.uk/home/jeh/Simjava/queueing/mm1_q/mm1_q.html
The real question is how much faster would your application be able to process the messages had they been sitting on a LAN queue. I am not sure what the latency on your LAN is.
So, three points:
1. Message size matters.
2. Write-related latency does not matter most of the time, because in most cases you will have a backlog anyway.
3. The cloud-contributed delay is (throughput via the LAN) minus (throughput via EC2), assuming the consumer does not run in the cloud.
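Point 2 is the standard M/M/1 result: once the consumer's utilization approaches 1, a backlog forms regardless of where the per-message write latency comes from. A minimal sketch of that formula (Python; the rates are illustrative, not from the thread):

```python
def mm1_avg_in_system(arrival_rate: float, service_rate: float) -> float:
    """Expected number of messages in an M/M/1 system: L = rho / (1 - rho)."""
    rho = arrival_rate / service_rate  # utilization
    if rho >= 1:
        raise ValueError("unstable queue: backlog grows without bound")
    return rho / (1 - rho)

print(mm1_avg_in_system(9.0, 10.0))   # rho = 0.90 -> ~9 messages in the system
print(mm1_avg_in_system(9.9, 10.0))   # rho = 0.99 -> ~99; the backlog dominates
```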


There are several flaws in this reasoning. Message size will determine latency. Currently, it takes 25 milliseconds on average to place a 1K message and 45 milliseconds for a 100K message; this is from my laptop at home via FiOS to EC2. So let's take that 167 minutes and say it would be 80 minutes. What is 80 minutes? What do we compare it with?

In a real-world situation, the entire job executes in well under 60 minutes now. And realize this is just one job of many.

80 minutes is the time it took to write all those messages to the queue. Assuming that reading messages will take slightly longer than writing, because you will be performing some additional function to process the message, queue buffer space will be utilized more than 90% of the time. You will have a backlog, at least according to queuing theory. ...

A backlog is fine. That's kind of the point of queueing. The processing of the events is not in the critical path. The key is to get the event onto the queue and back to processing as fast as possible, in the scenarios I have been involved in (which include B2B at one of the highest-volume wholesalers in the world).
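The "off the critical path" idea above can be sketched with an in-process hand-off: the caller only pays for a local enqueue, and a background thread absorbs the slow network round trip. (Python sketch; the `None` shutdown sentinel is just for the demo.)

```python
import queue
import threading

outbox: queue.Queue = queue.Queue()

def publish(event) -> None:
    """Called on the critical path: microseconds, no network involved."""
    outbox.put(event)

def sender_loop(send) -> None:
    """Background thread: the slow (e.g. 100 ms) remote send happens here."""
    while True:
        event = outbox.get()
        if event is None:  # demo-only shutdown sentinel
            break
        send(event)

sent = []
worker = threading.Thread(target=sender_loop, args=(sent.append,))
worker.start()
for i in range(5):
    publish(i)        # returns immediately each time
publish(None)
worker.join()
print(sent)  # events are forwarded in FIFO order: [0, 1, 2, 3, 4]
```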

I am not sure what the latency on your LAN is.

The latency in question is local to the machine, and I've seen (while debugging issues) that it is often under a millisecond.

So, three points: 1. Message size matters.

And how does that relate to the question at hand, other than "unlimited message size" -> unlimited potential latency?

2. Write-related latency does not matter most of the time, because in most cases you will have a backlog anyway.

3. The cloud-contributed delay is (throughput via the LAN) minus (throughput via EC2), assuming the consumer does not run in the cloud.

As I already stated, this makes perfect sense if you are running in the cloud. I'm not talking about that. I'm talking about the notion that cloud-based MQ can be a wholesale replacement for local/LAN-based queuing.

If you limit the definition of messaging strictly to inter-process, queue-based communication, your criticism is absolutely correct. When I think of MOM, I think of distributed messaging across different platforms and network protocols. I think of examples such as the Blue Exchange network, transportation management systems, hospitality integration and social network updates. I think of this:
"Message-oriented middleware (MOM) is a client/server infrastructure that increases the interoperability, portability, and flexibility of an application by allowing the application to be distributed over multiple heterogeneous platforms. It reduces the complexity of developing applications that span multiple operating systems and network protocols by insulating the application developer from the details of the various operating system and network interfaces. APIs that extend across diverse platforms and networks are typically provided by the MOM."
You limit the scope of messaging to this:
http://en.wikipedia.org/wiki/Message_passing
And in that case, sure, no distributed messaging will ever make sense, much less cloud-based.

And in that case, sure, no distributed messaging will ever make sense, much less cloud-based.

Based on this, I don't think you understand my point at all. As a practitioner, I'm quite familiar with MOM. When you send a message to a queue and require guaranteed delivery, you must wait for a response before continuing on. The queue cannot guarantee the delivery of a message it does not receive. This has to be handled locally at some level. Generally this means blocking. You can use some sort of internal queuing, but what happens if the app crashes? Ultimately you need to write the message to some sort of persistent storage or wait for a response from the queuing system.
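One common way to square that circle is a local durable "outbox": persist first, acknowledge the caller, and let a background step forward the message to the remote queue. This is a sketch under assumptions, not any particular product's API; `OUTBOX`, `enqueue_locally` and `forward` are invented names:

```python
import json
import os
import tempfile
import uuid

# Local durable store; a real system would use a transactional log or database.
OUTBOX = os.path.join(tempfile.gettempdir(), "outbox-demo")
os.makedirs(OUTBOX, exist_ok=True)

def enqueue_locally(payload: dict) -> str:
    """Durably record the message before returning control to the caller."""
    msg_id = str(uuid.uuid4())
    path = os.path.join(OUTBOX, msg_id + ".json")
    with open(path, "w") as f:
        json.dump(payload, f)
        f.flush()
        os.fsync(f.fileno())  # survive a process crash
    return msg_id

def forward(msg_id: str, send_to_cloud) -> None:
    """Background step: push to the remote queue; delete only once it confirms."""
    path = os.path.join(OUTBOX, msg_id + ".json")
    with open(path) as f:
        payload = json.load(f)
    send_to_cloud(payload)  # may block or retry for a long time; the caller isn't waiting
    os.remove(path)         # safe: the remote queue now owns the message

msg_id = enqueue_locally({"order": 42})
forward(msg_id, send_to_cloud=lambda p: None)  # stand-in transport
```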

Latency should be <200ms, maybe. You won't see front office trading systems there, but everything else should be fine, and I'd imagine throughput for non-single-threaded applications should be fine also.

Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common).

That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch.

What is this, application design 101's assignment "give an example of as crappy a design as possible"?


It's an example of how the vast majority of business applications work today and how you can tie them into a MOM architecture without rewriting a line of code.

What about Business-to-Business exchanges? I've done a couple of B2B projects where an event-driven architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism...
Imagine that various businesses exchange and react to real-time events on topics... Even government agencies like Motor Vehicle, etc...
Opens up a lot of possibilities for value-add stuff like Complex Event Processing and real-time Business Intelligence...
Thoughts?

What about Business-to-Business exchanges? I've done a couple of B2B projects where an event-driven architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism...

I was actually thinking that is exactly where this kind of thing might make sense, but I don't really understand why web services caused you problems. A web service hosted on the web is a web service. If that service is a queue, it's still a web service.
I've built and maintained web services that merely wrote to a queue and returned a confirmation of receipt. Sometimes I think that the problem with web services has more to do with people's assumptions about them than any real issues. Web services receive messages and respond to them, often with trivial confirmations. How's that different from how a cloud-hosted queue would work?


It was not Web Service technology that was the problem, but the request-reply paradigm, which necessitated polling for changes...
In general this type of pattern should be handled by publish-subscribe architecture, which is usually done through message queuing engines such as cloudMQ...
Pub-sub allows for a lot more extensibility in your architecture...

It was not Web Service technology that was the problem, but the request-reply paradigm, which necessitated polling for changes... In general this type of pattern should be handled by publish-subscribe architecture, which is usually done through message queuing engines such as cloudMQ...Pub-sub allows for a lot more extensibility in your architecture...

If I'm not mistaken, pub-sub is generally implemented using polling. When I worked on this kind of thing, our customers and vendors would have their own web services that would receive updates. No polling. Less sophisticated customers would use polling, though. I guess with pub-sub you don't have to think about the polling that happens. I would say that pub-sub is less well understood than web services and queueing, though.
Again, this kind of thing could be a really great addition to a lot of architectures, but if you think this is going to magically make your job easy, you are sorely mistaken. The reality is that you are taking on a lot of risk by moving things to the cloud right now. We are squarely in the hype phase of cloud computing and SaaS. The backlash has barely started.

I also would be interested in the differences between SQS and cloudMQ...
From what I read, Amazon's SQS does not support enterprise messaging features such as message grouping and sequencing, XA transactions and other facilities that JMS supports...

Okay, so nobody bit on my posting about what's in cloudmq.jar but got all distracted by latency, or maybe didn't realise what Jose and I were alluding to.
FWIW, yes, the cloud will incur latency, as it's the internet and latency QoS does not apply (doh, no news there), and no, it's unlikely you'd outsource such a key bit of infrastructure for intra-company use. B2B, however, is different, and having a guaranteed messaging backbone to link your customers to you over the internet has obvious benefit for building on. As for security, well, an extra few milliseconds for decent encryption is not so hard.
I've wandered around the website for a few minutes, downloaded cloudmq.jar and found it full of WebSphere MQ and RMM client jars. Where is the AMQP there? The guide on connecting talks about a cloudMQ JMS provider that supports AMQP. No sign of it in the client jar.
Maybe I should sign up and see. Are there web pages to do all the JMS management stuff? Browse queues, organise durable subscriptions? Manage JMS logins and so on? It would be nice to see a bit more on the web.
FreedomOSS seem to peddle services over ActiveMQ (no AMQP there either), ServiceMix and a few other freely licensed open source projects.
My experience tells me something smells a bit iffy on this one.

FWIW, yes, the cloud will incur latency, as it's the internet and latency QoS does not apply (doh, no news there), and no, it's unlikely you'd outsource such a key bit of infrastructure for intra-company use.

Why is it unlikely? Because it's foolish? If you believe that, you don't know enough people. And it's specifically one of the things that's recommended by the article, is it not? I think the PT Barnum quote goes: "there's a sucker born every minute."

B2B, however, is different, and having a guaranteed messaging backbone to link your customers to you over the internet has obvious benefit for building on. As for security, well, an extra few milliseconds for decent encryption is not so hard.

Even for B2B, what happens if you temporarily can't reach the queues? Does it block? If not, how does it guarantee delivery? I'm not saying that these issues can't be solved, just that they can't be solved on the cloud.


I just said unlikely; for some reason you used the word foolish. My comments are based on my experience of quite a few years in the field. I have no axe to grind or product to sell; instead I am at the buyer end of middleware and have to make this stuff work for clients. The complexity of outsourcing key services is huge and requires very well-defined processes for management. Remember, I am talking about intra-organisation messaging here. B2B is where the interesting action will be in the next few years for higher-level network services, probably backed by a telco who can provide a service level above their private network.
All the products I have used in production, and for sure AMQP, will handle a break between a client and the server just fine, assuming that is what you mean by not being able to reach a queue (all the major products detect failures in an atomically secure way, in sync with your messaging, with the right flags set).
You get to deal with the issue in various ways that suit your design, what with sync points, JMS transactions, idempotent receivers and even XA should you feel the need. The fact that the service is in the cloud is not relevant; you just have to restrict the patterns you use to deal with the longer latency, lower throughput and so on.
There is no reason you cannot extend guaranteed messaging over the unpredictable cloud; it will be as guaranteed as the software you engineer and deploy for the service. The so-called cloud is just what we've been doing for a long time in distributed systems, packaged up in an easier-to-consume and easier-to-manage fashion. Latency will have a massive distribution tail over a public network, and you'll get heaps of duplicates when things go wrong in the cloud, but functionally there is no reason why it won't work.
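The idempotent-receiver pattern mentioned above is the usual answer to those duplicates. A minimal sketch (in-memory here; a real receiver would have to persist the set of seen ids):

```python
processed_ids = set()  # in a real receiver this must be durable

def handle(msg_id, body, apply):
    """Apply a message at most once, no matter how often it is redelivered."""
    if msg_id in processed_ids:
        return False  # duplicate from a retry or failover: drop silently
    apply(body)
    processed_ids.add(msg_id)
    return True

applied = []
assert handle("m-1", "debit $10", applied.append) is True
assert handle("m-1", "debit $10", applied.append) is False  # redelivery ignored
assert applied == ["debit $10"]
```

Re-applying "debit $10" twice is exactly the failure this guards against; the dedup check makes redelivery harmless.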

Okay, so nobody bit on my posting about what's in cloudmq.jar but got all distracted by latency, or maybe didn't realise what Jose and I were alluding to. I've wandered around the website for a few minutes, downloaded cloudmq.jar and found it full of WebSphere MQ and RMM client jars. Where is the AMQP there?

Are there web pages to do all the JMS management stuff? Browse queues, organise durable subscriptions? Manage JMS logins and so on?

cloudMQ does come with an easy-to-use, Flex-based GUI that helps users provision various messaging resources.

FreedomOSS seem to peddle services over ActiveMQ (no AMQP there either), ServiceMix and a few other freely licensed open source projects.

My experience tell me something smells a bit iffy on this one.

Freedom Open Source Solutions is a successful organization that employs several hundred people. Freedom has facilitated the adoption of open source in industries, such as the public sector and healthcare, that have traditionally relied exclusively on proprietary technology. Our strategy is to provide customers with compelling technology to build competitive advantage.

Hi Mikhail,
Are there any resources explaining how the backbone of your messaging network is engineered? It is far from transparent what is inside your cloud. I was surprised, after reading your website with its claims of an AMQP network, to see a WAS MQ JMS download in cloudmq.jar.
Sorry, I did not mean to show any level of disrespect; rather, I found your website did not add up. There are no names of employees, board members or investors, nor even a nice techie blog; Google only has press releases, and I don't see you sponsoring work in the open source world you freely use. With several hundred employees, you definitely keep things close to your chest.
Regards,
Colin.

Surely latency for batch-type message exchanges wouldn't be too much of a problem?
It's possible the message transport layer would utilise negative acks and batch multiple logical messages in a single network transfer.
Bandwidth becomes more important than latency in this case.
Is it possible to use negative acks on a WAN, rather than a LAN, using multicast/broadcast?
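The amortization argument is easy to put numbers on. A sketch (Python; the batch size and the 100 ms round trip are assumptions, not measurements):

```python
def transfers_needed(n_messages: int, batch_size: int) -> int:
    """Number of network transfers when logical messages are batched."""
    return -(-n_messages // batch_size)  # ceiling division

def latency_cost_s(n_messages: int, batch_size: int, round_trip_s: float) -> float:
    """Total latency paid, assuming one round trip per transfer."""
    return transfers_needed(n_messages, batch_size) * round_trip_s

# 100,000 logical messages at an assumed 100 ms per round trip:
print(latency_cost_s(100_000, 1, 0.1))    # unbatched: ~10000 s of pure latency
print(latency_cost_s(100_000, 500, 0.1))  # batched:   ~20 s; bandwidth now dominates
```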
