Friday, November 11, 2011

I have been wanting to visit HackJam for a long time. I finally got off my lazy bum and went for the Eagle crash course, generously hosted by Alex Hornstein, a visitor from the US.

It was an excellent crash course. Good thing I paid some attention back in school during electronics classes, so I was able to follow most of the theory and basic circuit designs. What you don't get from school is the link to the real world, and this is what Alex's crash course offered, i.e.:

How to take a circuit design and have it laid out

How to output it in Gerber format, which a PCB maker can understand, and have your board made

He shared his experience with a low-cost PCB prototype board maker in Hong Kong (just down the street from where I work!).

We went from zero to knowing how to design and economically make a prototype board in a few hours!

Another interesting part of this visit was meeting Jon again. Not only is he working on Boot.hk and Makible.com, he is also at HackJam! Jon, do you somehow bend the laws of physics and have a 48-hour day??

I checked out his little work-in-progress Make-a-Bot like machine and it looks like he is almost there!

I have heard of 3D printers, but I had always thought of them as ridiculously expensive. Boy! How fast technology has moved. Apparently today, it is possible to have a 3D printer for under 1,000 USD!

It was interesting listening to Alex share his vision of manufacturing goods as a SOHO/lifestyle business. In fact, he is well on his way. I saw his video of an electronic toy which he designed and manufactured. So, he is able to design his own circuit board, program his own firmware and print his own plastic. I don't think he is too far off!

Saturday, September 17, 2011

The Singapore government's heavy-handed approach is definitely working in propelling Singapore as a leading destination for various enterprises to set up their data centers. Who's in Singapore? A quick search on Data Center Knowledge paints a fairly evident picture. Some brand-name companies are IBM, Adobe, Amazon, OpenDNS and Salesforce.

I am glad to see the Hong Kong government is trying to do the same. While visiting HKTDC's ICT Expo, I attended the "Canada - HK Innovation and Technology Partnering Forum". This is where it was first brought to my attention that the Hong Kong government is finally trying to do something about the data center industry.

Recently, I came across an article on Computerworld which talks about cloud readiness in Asia. According to the Asia Cloud Computing Association, in their whitepaper on cloud readiness, Hong Kong ranks 2nd in the cloud readiness index, behind Japan and ahead of Singapore.

Hong Kong may rank higher in the cloud readiness survey, but I don't believe this is a true reflection of how the industry is shaping up. A simple search on the Data Center Knowledge website shows all the high-end cloud companies setting up shop in Singapore: Amazon, Salesforce, Adobe, etc. In Hong Kong, you only get companies from the lower end of the spectrum, which are really just co-location companies such as NTT, Equinix and Rackspace. Why is this important? Because I believe building a cloud service offering such as the AWS, Salesforce or Omniture platforms represents a greater commitment than offering rental space for servers, which is what co-location really is. The former requires a more sophisticated workforce, hence creating better job opportunities for the area.

Hong Kong may be "winning" in this so-called "cloud readiness", but sophisticated cloud companies are choosing Singapore over Hong Kong.

Monday, September 12, 2011

I have read quite a bit about GitHub and all the buzz around it, but only recently did I get to experience it. I was using the Java wrapper library for createsend in a web application I am working on. I encountered the classloader issue which another user had observed. I quickly forked the createsend-java repository, fixed it, pushed it back to GitHub and issued a pull request!

The next day, tobio, the maintainer of the main repository, pulled in the changes, and the fix is now in the official branch. No more emailing around, running crazy diff commands to create a patch, emailing the patch to the maintainer, and hoping he has time to check and merge in the changes.

Bottom line, GitHub has removed a lot of friction in open source software development.

Without any doubt, it's merely an economical truth that free contributions and commercial enterprise interests are mainly incompatible, and of course, everyone needs to decide on their own on how to use and invest their resources.

I more or less agree with the above statement. After all, economic forces drive a lot of the activity in today's world.

HKE and CLP both offer a "Maximum Demand Tariff", or MDT, which has the potential to lower energy costs for high-consumption customers. In the investigation summarized below, I focused on HKE because they provide a simpler MDT scheme (not necessarily cheaper or better).

Hong Kong Electric provides an excellent explanation of how a customer will be billed if they opt for the "Maximum Demand Tariff".

As their example shows, opting for the Maximum Demand Tariff may not be beneficial for all customers. As HKE has pointed out on their website:

Maximum Demand Tariff consists of two parts, i.e. demand charge and energy charge. The demand charge is based on the maximum demand in kVA, while the energy charge depends on the energy consumption in Unit (kWh) of the month. Tariff charges are subject to a minimum of 100 kVA of the chargeable demand.
Maximum Demand Tariff is only beneficial to electrical installations with a high load factor. This applies to accounts of considerably high electricity consumption over a long period of time with a steady load. If these conditions are not satisfied, it is possible that the electricity charge could be higher under Maximum Demand Tariff than Commercial, Industrial & Miscellaneous Tariff.

Looking into the concept of maximum demand and the calculations, the tariff will be most beneficial to customers with a more constant load over time and a high power factor. This is because the maximum demand charge, at least in HKE's case, is calculated using the peak average apparent power over a 30-minute period within the billing period (1 month).

The observation that a customer with a more constant load will benefit from the Maximum Demand Tariff billing method is pretty straightforward: since the peak usage is close to the average usage, the demand charge will be small relative to the energy charge.

Power factor plays a role in the calculation because the demand charge in the "Maximum Demand Tariff" is measured in VA (apparent power), whereas the energy charge is based on W (real power).
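To make the two-part bill concrete, here is a small Python sketch of the MDT calculation. The rates are made-up illustrative numbers, not HKE's actual tariff; only the structure (a demand charge on the peak 30-minute kVA plus an energy charge on kWh, with a 100 kVA minimum) follows the scheme described above.

```python
# Sketch of the MDT billing structure (illustrative rates, NOT HKE's actual tariff).
def mdt_bill(peak_kva, energy_kwh,
             demand_rate=60.0,   # HK$ per kVA of chargeable maximum demand (assumed)
             energy_rate=0.9):   # HK$ per kWh (assumed)
    """Monthly bill = demand charge + energy charge.

    peak_kva: highest 30-minute average apparent power in the month.
    Chargeable demand is subject to a 100 kVA minimum, per HKE's scheme.
    """
    chargeable_kva = max(peak_kva, 100.0)
    return chargeable_kva * demand_rate + energy_kwh * energy_rate

# Two customers using the same 72,000 kWh over a 720-hour month (100 kW average):
# - flat load at power factor 1.0  -> peak 30-minute demand of about 100 kVA
# - peaky load (300 kW peak) at power factor 0.8 -> peak demand of 375 kVA
flat  = mdt_bill(peak_kva=100.0, energy_kwh=72000)
peaky = mdt_bill(peak_kva=300.0 / 0.8, energy_kwh=72000)
print(flat, peaky)  # the peaky, low-PF customer pays a far larger demand charge
```

With identical monthly energy, the peaky, low-power-factor customer pays a much larger demand charge, which is exactly why load factor and power factor both matter under this scheme.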

If we also assume that a typical load factor falls in the 0.1 - 0.3 range, from the above plot we can see that we can reduce the energy bill under the MDT scheme by:

Improving the load factor: spreading the load across time, especially during peak energy consumption

Improving the power factor of the load during the maximum demand period.

For the next step, it is probably worth quantifying the cost of improving the power factor of the consumer's entire load vs. shifting loads from peak times to other times. Combined with the information in this post, we will be able to calculate the ROI of projects aimed at improving the load factor and/or the power factor.

Friday, August 12, 2011

IDC Solutions from Australia was generous enough to provide us with a prototype of their pMon power monitoring hardware. The solution speaks the Modbus TCP communication protocol, and I am glad to say I have successfully integrated it into our DC Insight offering.

Since InterMapper does not support Modbus TCP out of the box, I had to venture into using a command-line based probe to integrate the two.

The intention of this blog post is to summarize the development process and list the resources I used (which I can reference again later if needed :)

First off was studying the Modbus protocol. The Modbus Organization hosts the document Modbus Application Protocol Specification v1.1b. After reviewing the specification to understand the low-level workings of the protocol, it was time to test and make sure the Modbus TCP functionality was indeed working on the device. It appears there are not as many open-source/free tools available for this test compared to SNMP. In the end I found a free command-line tool, modpoll, and a simple GUI test client, Simply Modbus (not free, but only US$60 for a single license). I must confess, I didn't buy Simply Modbus TCP, since I am comfortable with the command line. But for those who like a GUI, give Simply Modbus a try!

I also looked into other open-source projects such as jamod and openscada. I was unable to get jamod to work, and openscada is really overkill for what I am trying to achieve.

This should all have been relatively straightforward, except the documentation from IDC for the base register was incorrect! The documentation claimed the starting address is 401000, but in fact, it was just 1000. Those who know the Modbus protocol should see right away that something is wrong, since 401000 is greater than 65535, the 16-bit limit of the Modbus address space, but I was new and didn't know better.

With some trial and error, and some luck, I managed to find that the starting address is 1000.

In hindsight, it is now pretty clear why the writer of the document/manual might have thought it was 401000. In the legacy Modbus numbering convention, holding registers are written with a leading 4 ("4xxxxx" really just means holding register xxxxx), and a raw request makes the confusion easy to compound. A typical Modbus request looks something like this:

00 04 00 00 00 06 01 03 04 8A 00 02

The two bytes "04 8A" are the starting address field (0x048A = 1162 in this example), but if you are not careful about where the field begins, you may actually convince yourself the address starts with 4xxxxx!
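To make the field boundaries concrete, here is a small Python sketch that decodes that exact request according to the Modbus TCP framing (the decoding follows the spec; it is my own illustration, not taken from IDC's documentation):

```python
import struct

# The request from above, as raw bytes.
frame = bytes.fromhex("00 04 00 00 00 06 01 03 04 8A 00 02".replace(" ", ""))

# MBAP header (7 bytes): transaction id, protocol id, length, unit id,
# then the PDU: function code, starting address, quantity (all big-endian).
tx_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
func, start_addr, quantity = struct.unpack(">BHH", frame[7:12])

print(func)        # 3 -> read holding registers
print(start_addr)  # 1162 (0x048A) -- always a 16-bit value, never 401000
print(quantity)    # 2 registers requested
```

Once the fields are unpacked this way, there is no room for a 4xxxxx misreading: the address field is exactly two bytes.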

After I sorted that out, it was time to pick the right method to integrate with InterMapper. At first, I thought a simple TCP probe might do the job, but after looking into the manipulation required, a command-line based probe was definitely the way to go.

Since I was doing some pretty low-level network programming, I studied the Python socket API in the official Python documentation. The write-up and examples there were sufficient for me to get the Python integration going. After the Python script was completed, the integration with InterMapper was pretty straightforward.
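A minimal version of such a probe needs surprisingly little code. Below is a self-contained sketch of the idea: a tiny Modbus TCP client issuing a function-code-3 (read holding registers) request, plus a fake in-process "device" so the example can run anywhere. The register values are made up for illustration; a real probe would point at the pMon's IP and its (corrected) register map.

```python
import socket
import struct
import threading

def read_holding_registers(host, port, unit_id, start_addr, count, tx_id=1):
    """Minimal Modbus TCP client: function code 3, read holding registers."""
    pdu = struct.pack(">BHH", 3, start_addr, count)
    mbap = struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(1024)  # single recv is fine for these tiny frames
    # Response layout: MBAP (7 bytes), function code, byte count, register data.
    byte_count = resp[8]
    return list(struct.unpack(">%dH" % (byte_count // 2), resp[9:9 + byte_count]))

def fake_pmon_server(server_sock, registers):
    """Tiny stand-in device that answers one read request (for testing only)."""
    conn, _ = server_sock.accept()
    req = conn.recv(1024)
    tx_id, _, _, unit_id = struct.unpack(">HHHB", req[:7])
    _, start, count = struct.unpack(">BHH", req[7:12])
    values = registers[start:start + count]
    pdu = struct.pack(">BB", 3, 2 * len(values)) + struct.pack(">%dH" % len(values), *values)
    conn.sendall(struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit_id) + pdu)
    conn.close()

# Demo: serve two made-up register values at the pMon's real base address, 1000.
table = [0] * 1100
table[1000], table[1001] = 2301, 57  # hypothetical readings

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=fake_pmon_server, args=(srv, table), daemon=True).start()

print(read_holding_registers("127.0.0.1", srv.getsockname()[1], 1, 1000, 2))
```

A real command-line probe would add argument parsing and output formatting on top of read_holding_registers; the wire-level work is all in those few struct calls.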

Here is a picture of our fuse box. There are 3 rows in this fuse box. The open section shows a row of fuses for a single phase, and there is one more row above and below for the other phases. The air conditioner is a 3-phase unit with a power feed from each phase. I have clamped the 3 feeds of the air conditioner to monitor our A/C power usage.

Here is a close-up of the current transformer clamped onto a live power feed.

Friday, August 5, 2011

Here is an excellent write-up by William W. Fisher about the behavior of SNMP probes in InterMapper. As we are building our DC Insight solution on top of InterMapper, a thorough understanding of InterMapper is very important.
I would like to republish it here for those who are interested in a deeper understanding of how InterMapper behaves.

InterMapper polls devices by sending a request packet and receiving back a response packet. If InterMapper sends a request packet and does not receive a response within the specified timeout (usually 3 seconds), IM counts that packet as 'lost' and retries the request. If InterMapper's request fails to elicit a response three consecutive times, the device's status is set to down. (3 is the default)

With an SNMP probe, the lost packets are SNMP packets. There are three possibilities for where the packet was 'lost':

1. The request didn't reach the target device.
2. The target device did not generate a response within 3 seconds.
3. The response did not make it back to InterMapper.

The SNMP probe is slightly complicated by the fact that the final retry will be a ping packet instead of SNMP. We implemented it this way after finding that some devices do not reliably answer SNMP packets on time. For example, a busy router might leave SNMP packets unanswered, but answer pings immediately. (Responding to a SNMP query is more computationally intensive than answering an ICMP echo). A device that answers the final ping retry is marked as "No SNMP response".

If pings get through fine, but an occasional SNMP packet is lost to one particular device, my sense is that nothing is wrong with the network. I would advise that you increase the threshold for packet loss for that one device to 10% and leave it at that.
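To make the decision flow in that write-up easier to follow, here is a toy sketch of the polling logic in Python. This is my own model of the described behavior, not InterMapper's actual code; snmp_get and ping are stand-in callables that return True when the device answers within the timeout.

```python
# Toy model of the poll/retry scheme described above (illustration only).
def poll_device(snmp_get, ping, retries=3):
    """Return 'up', 'no snmp response', or 'down' per the scheme above."""
    for attempt in range(retries):
        final_try = (attempt == retries - 1)
        if final_try:
            # The final retry is a ping: busy devices often drop SNMP
            # queries but still answer ICMP echo promptly.
            if ping():
                return "no snmp response"
        elif snmp_get():
            return "up"
    return "down"

answers = lambda: True   # device that responds in time
silent = lambda: False   # device (or path) that drops the packet

print(poll_device(answers, answers))  # 'up'
print(poll_device(silent, answers))   # 'no snmp response': busy but alive
print(poll_device(silent, silent))    # 'down'
```

The key detail is that final attempt: a ping answered there distinguishes a busy-but-alive device ("No SNMP response") from one that is truly down.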

Saturday, July 30, 2011

Update 2011-08-09: Turns out it is not possible to externalize it into a separate taglib project. I suspect it is actually reading the MANIFEST.MF from the taglib jar itself rather than the MANIFEST.MF file of the referencing project. The project on bitbucket has been deleted.
------
I have been searching the web for a taglib which reads the version information from the MANIFEST.MF file. Since this is very commonly needed, I am surprised I was not able to readily find such a JSP taglib available!

This has got to be a common problem, since I found plenty of references on how to read version information from the MANIFEST.MF file. For example:

Sunday, June 26, 2011

Over the past couple of weeks, I was working closely with a data center operator to complete a cooling optimization project. The customer deployed our DC Insight solution, which gives us a very high-density environment monitoring capability. We have 2 sensors at each rack and 4 at each CRAC, as well as door sensors, dry contacts and fluid tags, all deployed with no cabling required.

We let our system collect the "as-is" data for a week, then we went about optimizing the air-flow distribution of the data center, including installing HotLok blanking panels and KoldLok grommets. We also re-arranged some perforated tiles to allow better air-flow distribution to the server equipment.

We then sat and waited for another week to allow our DC Insight solution to collect data.

Using the before and after data, we compiled a comprehensive report for our customer, confirming some existing suspicions, as well as discovering new interesting trends and potential risks. Bottom line, this is a very data-driven process which allows data center operators to truly understand the environment which houses their mission critical systems.

The time has now come to streamline this experience. This is where DC Insight - Analytic comes in. We are moving towards gathering this grass-roots experience and merging it with today's best practices to provide an analytics platform for data center users and operators. It is still in the alpha stage of development, but anyway, here is a screenshot:

Looks familiar? It is time to bring the intuitive user experience consumers have enjoyed for years to the world of data centers.

DC Insight - Analytic: The platform which brings all these together. The number crunching and processing platform to bring an aggregated view and provide business intelligence to operators, managers and executives of today's data centers.

Monday, June 20, 2011

With a project still underway in Hong Kong, I still ended up heading to Beijing for a week. Thanks to my colleagues in Hong Kong for holding the fort, we were able to handle some ad-hoc customer requests regarding the project while I was away.

Thanks to my colleagues in Beijing, we were able to meet with 2 System Integrators in Beijing on such short notice. We showcased our Wire-Free RFID environment monitoring solution, soon to be re-branded DC Insight. We received some positive responses from the System Integrators, and they even suggested they would start putting this in as part of their standard data centre design proposals!

I also spent a good amount of time training my technical colleague Mr. Niu on the system. He is a pretty quick learner. By the 3rd day, he had no issue setting up the demo and presenting it for our second meeting with another SI. Boy, was I relieved, as it was very difficult for me to present in my broken Mandarin. As China presents ample opportunities, I see a strong need to really pick up Mandarin quickly!

In the evenings, we spent some time networking and socializing over dinner with some of my dad's friends and former associates in China. Thanks to my dad for arranging these events.

With my broken Canton-Mandarin, I did try to chime in occasionally. In the beginning, I was often met with confused "what-the-heck-is-he-saying" looks. A couple more beers in, either I stopped caring, or the conversation started to flow a little better. At the end of the day, we exchanged some contacts and email addresses. Hopefully this will lead to some lasting relationships and opportunities for both parties on the motherland.

Monday, June 13, 2011

After adding the component to interface RF Code Zone Manager with a custom InterMapper SNMP probe, I went about load testing it to ensure the stability of the solution. I created a map of many, many probes and set them to poll at a 1-second interval.

During the load testing, I observed this steady growth in memory:

Not a good sign.

The good news is that jVisualVM made finding the memory leak easy! I recall in the old days memory-leak hunting used to be a pain without enterprisey tools such as YourKit.
I don't know when this was added to jVisualVM, but this function, "Find X biggest object by retained size", is tremendously helpful in finding the root cause of a memory leak!

Why is this function helpful? To understand how retained size helps determine the root cause of a memory leak, refer to the article "Shallow and retained sizes" by the folks at YourKit.
Now just examine the classes with the largest retained sizes, and they are probably the classes giving you your memory leak!

Monday, May 16, 2011

With the recent success in integrating the InterMapper NMS with RFCode temperature and temperature + humidity tags, we took it one step further and integrated the event-driven tags: dry contact, door and fluid tags.

Since there is a protocol mismatch between InterMapper and RFCode, to support event-driven tags we developed a middleware which integrates nicely with RFCode Zone Manager. The middleware acts as a virtual appliance which utilizes existing SNMP standards such as RFC 4133 and RFC 4344 to provide an SNMP interface to the RFCode tags.

The concept is illustrated in the diagram below, where in InterMapper we select the QDS -> "DCI - Truth Value" probe. We set the "Virtual port" to "1", which is mapped to Tag ID "RCKDOR00001339" in the middleware.
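Conceptually, the middleware's job at this point is just a lookup: virtual port -> tag ID -> latest tag state -> SNMP value. A rough sketch of that chain (the state table is my illustration, not the middleware's actual code; the 1 = true, 2 = false encoding follows SNMP's standard TruthValue convention):

```python
# Table from SNMP "virtual port" to RFCode tag ID, as configured above.
PORT_TO_TAG = {1: "RCKDOR00001339"}

# Latest event-driven tag states pushed from Zone Manager (made-up sample).
TAG_STATE = {"RCKDOR00001339": "open"}

def snmp_truth_value(virtual_port):
    """Map a door tag's state to an SNMP TruthValue: 1 = true, 2 = false."""
    tag_id = PORT_TO_TAG[virtual_port]
    return 1 if TAG_STATE[tag_id] == "open" else 2

print(snmp_truth_value(1))  # 1 while the door tag reports "open"
```

An SNMP GET against virtual port 1 is thus answered with the current state of tag RCKDOR00001339, which is all InterMapper needs to see.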

Now our customers will be able to utilize the excellent monitoring interface of InterMapper combined with the benefits of RFID wire-free sensors.

Thursday, February 10, 2011

After completing the web store with the PayPal option, we had our first online sale! It was an international sale to Mauritius. Although the sale was small, it does provide an opportunity/excuse to look into international shipping arrangements.

After some back-and-forth emails with the customer and looking into the shipment process, I finally shipped out the product today!

When a chargeback occurs, the above process simply reverses, except the withdrawal of money is done without the seller's consent.

If a fraudulent chargeback does occur, it would not be so bad. After all, it is for a small amount, and it is just old inventory sitting around in our warehouse. This does not mean that if it happens, I will ignore it. I will fight it until my last breath. Not because of the money or the "principle", but as a learning opportunity/excuse to explore the options available to fight online fraud from a seller's perspective.

For those who do come across this post, I would love to hear your stories and experiences with e-commerce!

Sunday, January 30, 2011

The Wire-Free monitoring solution we put together at QDS has been gaining more interest. We have been busy going around to our partners, showcasing this technology. We believe the combination of Wire-Free sensor hardware and the Visual Hierarchical Maps software is a powerful one which helps facility managers gain insight into their data center environment. A deployment of this solution is also coming up, and I have been busy preparing for it. It is a big data center.
Just to share a sense of the scope: it is using 5 readers, and each reader can cover 2,000 - 5,000 sq ft of space. At the moment, it is only using a small number of sensors (100+), but the great thing about this technology is that you can simply buy more sensors, stick them in the locations you want, add them to the software and you are done (provided they are in an area with existing reader coverage). Traditionally, adding sensors to a data center, or anywhere else, is quite a laborious project in itself. You will need to plan for network, power, rack space (for rack-mountable appliance-based solutions), cabling, etc. This hassle is simply not necessary with a wire-free solution anymore.

Our first venture into online sales is also live. The goal is to handle consumer traffic and our consumer products. Some of the products we carry are also of interest to consumers. However, they are usually looking to buy only one or two units, and it does not make much sense for our Account Managers, whose focus is to serve our partners, to spend too much time handling these inquiries.

On the technical side, we are using PayPal to handle all the customer information as well as payment processing. The decision is a good one, IMHO. First off, we bypass the whole issue of PCI compliance, since we have off-loaded payment processing (including credit cards) to PayPal. Secondly, we do not store any customer information either; this is also handled on the PayPal side. With this setup, there is absolutely no customer information processed or stored on our side, so we do not need to "harden" the website at the moment.