“snap” is a Linux packaging system which aims to “package any app for every Linux desktop, server, cloud or device, and deliver updates directly”.

What is a snap?

A snap:

is a SquashFS filesystem containing your app code and a snap.yaml file with specific metadata. The filesystem is read-only and, once installed, the snap gets a writable area. (My comment: sounds like Docker?)

is self-contained. It bundles most of the libraries and runtimes it needs and can be updated and reverted without affecting the rest of the system.

is confined from the OS and other apps through security mechanisms, but can exchange content and functions with other snaps according to fine-grained policies controlled by the user and the OS defaults.

Snapping philosophy

New to the world of snaps and wondering what is different from traditional packaging? How should you architect your application bundle and iterate from prototype to a working application? This is the place for you!

Relocatable code!

The main concept in snapping your software is that your application needs to be relocatable. It is good practice not to rely on hard-coded paths like /etc and such, but to read your assets, configuration and hooks from subdirectories of your application. A common approach for an application is to first read some local directories, via a path relative to your executable, and then fall back to global ones.

This also has the net benefit of letting your application work from your development directory, enabling testers to simply download it from your VCS to try a new feature, without relying on any global installation. The write-debug-fix cycle just became way easier!

Note that you can also use environment variables to influence the paths your application looks at. Then just ship a wrapper script that your snap commands point to, which sets those environment variables based on $SNAP, $SNAP_DATA and $SNAP_USER_DATA, for instance, so the application knows where to read data and assets from and where to write to.
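As a concrete sketch (the application name and the MYAPP_* variable names are invented for illustration, not taken from this article), such a wrapper could look like this:

```shell
#!/bin/sh
# Hypothetical launcher: inside a snap, snapd sets $SNAP, $SNAP_DATA and
# $SNAP_USER_DATA; outside a snap we fall back to the current directory so
# the same script also works from a development checkout.
SNAP="${SNAP:-$(pwd)}"
SNAP_USER_DATA="${SNAP_USER_DATA:-$HOME/.local/share/myapp}"   # "myapp" is a made-up name

# Tell the (hypothetical) application where to read assets from and write data to.
export MYAPP_ASSETS_DIR="$SNAP/assets"
export MYAPP_DATA_DIR="$SNAP_USER_DATA"

# exec "$SNAP/bin/myapp" "$@"   # finally hand over to the real executable
```

Because every path is derived from the environment, the same binary works both confined and from a checkout.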

Ship (and refer to!) your own dependencies

Another important concept of snaps is that you are in control of your dependencies. Other updates on the system will no longer break you, as long as you ship everything you need and update it at your own pace.

Generally speaking, your snap will only see libraries and third-party dependencies from the snap itself and the core snap. Do not rely on the latter apart from some system facilities like network access, device nodes and such. Ship all your dependencies as part of your snap (and so, don’t hardcode paths to look for them!). Snapcraft helps you by creating a master wrapper script that redirects library loading to directories under your snap folder before falling back to the system ones. The wrapper content varies depending on the technology you are using (by overriding PYTHONPATH, LD_LIBRARY_PATH, GEM_HOME, PATH…).
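To illustrate the idea (this is an assumed layout sketch, not the exact wrapper Snapcraft generates), the redirection boils down to prepending snap-local directories to the relevant search paths:

```shell
#!/bin/sh
# Sketch: prepend the snap's own directories so bundled libraries and
# tools are found first, then fall back to the system ones.
SNAP="${SNAP:-$(pwd)}"
export LD_LIBRARY_PATH="$SNAP/usr/lib:$SNAP/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PYTHONPATH="$SNAP/usr/lib/python3/dist-packages${PYTHONPATH:+:$PYTHONPATH}"
export PATH="$SNAP/usr/bin:$SNAP/bin:$PATH"
```

The `${VAR:+:$VAR}` expansion only appends the old value (with its separating colon) when the variable was already set, avoiding a trailing `:`.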

Write data to user path

You can’t write data to every user-writable path with snap. Your snap-related data are constrained to very few locations. This enables the rollback mechanism to revert your data alongside the code version itself, ensuring your data are always compatible with your code.

So, use $SNAP_USER_DATA for user data. If you have global configuration that should be readable by multiple users, read from and write to $SNAP_DATA. Remember that this last path is only writable by the root user, though.

Your wrapper script can use the same environment-variable technique described in the first section. Another strategy is to unconditionally cd to $SNAP_USER_DATA.

Common vs. versioned paths

The previously mentioned paths are versioned. It means that for each new update of your snap, the content will be copied to a new directory.

Some data are big assets that don’t really need to be versioned (even if you roll back, no configuration or data format will be specific to one version). For these, you can use $SNAP_USER_COMMON and $SNAP_COMMON, which are similar in permissions to their *_DATA counterparts.

However, as their names indicate, any change done there is common to all versions of your snap (only one instance of the data exists on disk), and as such these directories shouldn’t contain any version-specific info. We discourage using them for configuration (which may change format from one version to the next), data storage and even database files, if you don’t plan to keep the schema backward-compatible.

The rule of thumb is to ask: “if I revert to the previous version of the snap, will it be able to read all the data in the COMMON directories?” If the answer is no, move some of that data to the versioned paths. Do not worry, we have a great garbage collection keeping only a few working versions of your snap!

Always start developing your snap in devmode

Proper confinement is a challenging topic. Adding on top of that the adjustments you have to make to ensure your code is relocatable, while shipping all your dependencies on a read-only system, is calling for trouble!

For all those reasons you should always start developing your snap in devmode: ensure you define confinement: devmode in your snapcraft.yaml and install your package with --devmode. Iterate on it until your application works. Once done, open the system logs (/var/log/syslog), and you may see some AppArmor and seccomp warnings with ALLOWED tags. Those mean that you have some confinement work to do, by adding the right interface plugs to your snap declaration.
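As a sketch, a minimal snapcraft.yaml carrying that setting might look like the following (the snap name and command are placeholders, not taken from this article):

```yaml
name: myapp              # placeholder name
version: '0.1'
summary: Example application
description: Example application packaged as a snap.
confinement: devmode     # switch to "strict" once the confinement work is done
grade: devel

apps:
  myapp:
    command: bin/myapp-wrapper   # the wrapper script mentioned earlier
```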

You can then try to install your package in non-devmode, and use the snappy-debug.security command (from the snappy-debug package) to confirm that you don’t have any DENIALS left, or to get advice on what interfaces you may want to declare. Go through this iterative process to get your confined snap working.

Don’t request too many permissions

Some interfaces auto-connect when snaps are installed; some don’t. That means that the fewer plugs you use, the greater the chance people will install your snap. For interfaces that don’t auto-connect, try to make them optional, and have your code fall back to proper messages explaining why a feature is worth enabling the connection for, or why it is required.
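For illustration, plugs are declared per app in snapcraft.yaml; this hypothetical fragment requests only two interfaces (the app name and command are made up):

```yaml
apps:
  myapp:                     # placeholder app name
    command: bin/myapp-wrapper
    plugs:
      - network              # auto-connects on install
      - removable-media      # does not auto-connect; treat the feature as optional
```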

2017-09-13T16:30:00+08:00 · http://gangmax.me/blog/2017/09/13/add-mind-map-in-octopress-post

This post describes how to add “mind map” support in Octopress. The “mind map” content is in “markdown” format as part of an Octopress post, and rendered by the “KityMinder” JavaScript library.

Preparation

Before creating such a post in Octopress, I need to add something into Octopress to make it support such posts.

1. Add “KityMinder” JavaScript files

The latest official “KityMinder” files did not work properly. Please download the two files from the URLs above.

The two files do the magic to render the “markdown” format content in an Octopress post into a visual mind map.

2. Update Octopress to enable the JavaScript files

Update the “source/_includes/custom/footer.html” file by adding the two JavaScript files.

source/_includes/custom/footer.html


<!--
The following code renders the mind map content if there is any. The "<pre>"
tag which contains the mind map must have a "class" attribute whose name starts
with "km-container"; then the "<pre>" tag can be rendered properly. Multiple
"<pre>" tags are supported only if they have different names like
"km-container1", "km-container2".
-->
<script type="text/javascript" src="/javascripts/kity.min.js"></script>
<script type="text/javascript" src="/javascripts/kityminder.core.min.js"></script>
<script type="text/javascript">
[].forEach.call(document.querySelectorAll("[class^='km-container']"), function(dom) {
  var km = window.km = new kityminder.Minder();
  km.setup(dom);
});
</script>
<!-- The original content of this file: -->

Note that this code snippet has the following two enhancements compared with the original source:

Use a “wildcard element match” (from here and here) to select all the “mind map” parts in the current HTML document, avoiding literal one-by-one handling. You only need to name the “mind map” parts starting with “km-container” and they are good to go.

Put the “var km = window.km = new kityminder.Minder();” line inside the loop to fix the issue that only the last “mind map” part is rendered.

Creating a Post

Create a post in Octopress as usual. Add the following “mind map” content.


<pre class="km-container" minder-data-type="markdown" style="width: 1000px; height: 500px">- Scrum ceremonies
- 1. Sprint starting: requirement discussion and the poker game
- Confirm all the requirement details
- The product manager prioritize all the requirement items
- Play the poker game to give score to each task
- 2. Daily scrum meeting
- Each scrum member selects task
- Answer the following 3 questions:
- What did I do yesterday?
- What will I do today?
- Any problems I have?
- 3. Sprint ending: retrospection
- What do you think we did not do well in the last sprint?
- What do you think we did well in the last sprint?
- How can we improve?
</pre>

<pre class="km-container" minder-data-type="markdown" style="width: 150%; height: 400px">- 把大象放进冰箱需要几个步骤?
- 1. 打开冰箱门
- 2. 放入大象
- 3. 关上冰箱门
</pre>

It looks like the following:

Mind Map 1

- Scrum ceremonies
- 1. Sprint starting: requirement discussion and the poker game
- Confirm all the requirement details
- The product manager prioritize all the requirement items
- Play the poker game to give score to each task
- 2. Daily scrum meeting
- Each scrum member selects task
- Answer the following 3 questions:
- What did I do yesterday?
- What will I do today?
- Any problems I have?
- 3. Sprint ending: retrospection
- What do you think we did not do well in the last sprint?
- What do you think we did well in the last sprint?
- How can we improve?

Mind Map 2

- 把大象放进冰箱需要几个步骤?
- 1. 打开冰箱门
- 2. 放入大象
- 3. 关上冰箱门

After finishing editing the post, generate the static HTML content as usual. You should see the mind map content as expected.

// From Jake Archibald's Promises and Back:
// http://www.html5rocks.com/en/tutorials/es6/promises/#toc-promisifying-xmlhttprequest
function get(url) {
  // Return a new promise.
  return new Promise(function(resolve, reject) {
    // Do the usual XHR stuff
    var req = new XMLHttpRequest();
    req.open('GET', url);
    req.onload = function() {
      // This is called even on 404 etc
      // so check the status
      if (req.status == 200) {
        // Resolve the promise with the response text
        resolve(req.response);
      } else {
        // Otherwise reject with the status text
        // which will hopefully be a meaningful error
        reject(Error(req.statusText));
      }
    };
    // Handle network errors
    req.onerror = function() {
      reject(Error("Network Error"));
    };
    // Make the request
    req.send();
  });
}

// Use it!
get('story.json').then(function(response) {
  console.log("Success!", response);
}, function(error) {
  console.error("Failed!", error);
});

// or
get('story.json').then(function(response) {
  console.log("Success!", response);
}).catch(function(error) {
  console.error("Failed!", error);
});

Note the following facts:

When creating a “Promise” instance, pass a function as the constructor parameter. This function has two parameters, each of them a function: the former (resolve/fulfill) is called when the async operation succeeds and takes one parameter, the result of the async operation; the latter (reject) is called when the async operation fails and takes one parameter, the error object.

The “Promise” instance has a “then()” method, which is called when the async operation finishes. The “then()” method accepts either two arguments, “resolve/fulfill” and “reject”, or a single “resolve/fulfill” argument. The “resolve/fulfill” parameter is a function that takes the result of the async operation; the “reject” parameter is a function that takes the error object.

The “Promise” instance has a “catch()” method, which is called when the async operation fails. The “catch()” method accepts one argument, “reject”, which is a function that takes the error object.

What is “Promise.all”?

If you trigger multiple async interactions but only want to respond when all of them are completed, that’s where “Promise.all” comes in. The “Promise.all” method takes an array of promises and fires one callback once they are all resolved:


Promise.all([promise1, promise2]).then(function(results) {
  // Both promises resolved
}).catch(function(error) {
  // One or more promises was rejected.
  // If more than one promise is rejected,
  // only the first error can be caught here.
});

A perfect way of thinking about Promise.all is firing off multiple AJAX (via fetch) requests at one time:


var request1 = fetch('/users.json');
var request2 = fetch('/articles.json');

Promise.all([request1, request2]).then(function(results) {
  // Both promises done!
});

In web development, a polyfill is code that implements a feature on web browsers that do not support the feature. Most often, it refers to a JavaScript library that implements an HTML5 web standard, either an established standard (supported by some browsers) on older browsers, or a proposed standard (not supported by any browsers) on existing browsers. Formally, “a polyfill is a shim for a browser API”.

Polyfills allow web developers to use an API regardless of whether it is supported by a browser or not, and usually with minimal overhead. Typically they first check if a browser supports an API, and use it if available, otherwise using their own implementation. Polyfills themselves use other, more supported features, and thus different polyfills may be needed for different browsers. The term is also used as a verb: polyfilling is providing a polyfill for a feature.
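As an illustration of that check-then-implement pattern, here is a simplified polyfill sketch for Array.prototype.includes (a reduced version for demonstration, not a spec-complete implementation):

```javascript
// Typical polyfill pattern: feature-detect first, and only install our
// own implementation when the environment lacks the API.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    var list = Object(this);
    var length = list.length >>> 0;
    var start = fromIndex | 0;
    // Negative fromIndex counts back from the end, clamped at 0.
    var i = Math.max(start >= 0 ? start : length + start, 0);
    for (; i < length; i++) {
      var x = list[i];
      // SameValueZero comparison: NaN matches NaN, unlike indexOf.
      if (x === searchElement ||
          (typeof x === 'number' && typeof searchElement === 'number' &&
           isNaN(x) && isNaN(searchElement))) {
        return true;
      }
    }
    return false;
  };
}
```

On a browser that already ships `includes`, the `if` guard skips the assignment entirely, so the native implementation is used with no overhead.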

/*
 * BaseDynamicDataSourceHandlerImpl.java: This class is the base class for
 * all the handler classes which need the dynamic DataSource support. In each
 * subclass handling method, it should invoke the "setDataSourceContext(...)"
 * method to switch the DataSource (by passing the "mallId" argument from
 * the handling method's arguments).
 */
public abstract class BaseDynamicDataSourceHandlerImpl {
    ...
    /**
     * Set the DataSource context according to the current mall.
     *
     * @param mallId
     */
    public void setDataSourceContext(Integer mallId) {
        ...
    }
    ...
}

/*
 * CustomerFlowHandlerImpl.java: A subclass of "BaseDynamicDataSourceHandlerImpl".
 */
@Component
public class CustomerFlowHandlerImpl extends BaseDynamicDataSourceHandlerImpl
        implements CustomerFlowHandler {
    ...
    @Override
    public DataResult<String, Integer> getCustomerFlowTrendHistory(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        setDataSourceContext(mallId);
        ...
    }

    @Override
    public DataResult<String, Integer> getCustomerFlowTrendAverage(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        setDataSourceContext(mallId);
        ...
    }

    @Override
    public DataResult<String, Integer> getEnterShopCustomerAmountHistory(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        setDataSourceContext(mallId);
        ...
    }

    @Override
    public DataResult<String, Integer> getEnterShopCustomerAmountAverage(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        setDataSourceContext(mallId);
        ...
    }
    ...
}

According to the “DRY” principle, this should be improved. The following solution is done with Spring AOP. The solution comes from here and here.

/*
 * DynamicDataSourceAspect.java: The Aspect implementation which defines the
 * pointcut for each public method in any subclass of
 * "BaseDynamicDataSourceHandlerImpl", and makes the invocation of the
 * "setDataSourceContext" method.
 */
@Component
@Aspect
public class DynamicDataSourceAspect {

    @Before("execution(public * com.jcloud.zhike.handler.impl.BaseDynamicDataSourceHandlerImpl+.*(..)) && args(mallId,..)")
    public void setDynamicDataSource(JoinPoint joinPoint, Integer mallId) {
        BaseDynamicDataSourceHandlerImpl target =
                (BaseDynamicDataSourceHandlerImpl) joinPoint.getTarget();
        target.setDataSourceContext(mallId);
    }
}

/*
 * CustomerFlowHandlerImpl.java: Now the "setDataSourceContext(...)" invocation
 * lines can be removed.
 */
@Component
public class CustomerFlowHandlerImpl extends BaseDynamicDataSourceHandlerImpl
        implements CustomerFlowHandler {
    ...
    @Override
    public DataResult<String, Integer> getCustomerFlowTrendHistory(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        ...
    }

    @Override
    public DataResult<String, Integer> getCustomerFlowTrendAverage(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        ...
    }

    @Override
    public DataResult<String, Integer> getEnterShopCustomerAmountHistory(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        ...
    }

    @Override
    public DataResult<String, Integer> getEnterShopCustomerAmountAverage(Integer mallId,
            CustomerType customerType, Integer floorId, Integer historyDayCount) {
        ...
    }
    ...
}

More basic concepts about Spring AOP can be found in the official document here.

2017-08-30T14:41:00+08:00 · http://gangmax.me/blog/2017/08/30/why-intellij-idea-is-so-red

It seems every imported class in a Java project in IntelliJ IDEA cannot be recognized and is marked red. Why?

The solution is here. The reason is that the cache of IntelliJ IDEA is corrupted. You just need to “Click File -> Invalidate Caches” and restart the IDE.

2017-08-24T15:43:00+08:00 · http://gangmax.me/blog/2017/08/24/spring-nosuchmethoderror-autoproxyutils-dot-determinetargetclass-error

The error happens after introducing the “org.springframework.data:spring-data-jpa:jar:1.9.6.RELEASE” dependency when running the web application under Tomcat. The solution comes from “here”.

2017-08-24 15:00:16,979 DEBUG org.springframework.context.event.EventListenerMethodProcessor.afterSingletonsInstantiated:85 - Could not resolve target class for bean with name 'org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0'
java.lang.NoSuchMethodError: org.springframework.aop.framework.autoproxy.AutoProxyUtils.determineTargetClass(Lorg/springframework/beans/factory/config/ConfigurableListableBeanFactory;Ljava/lang/String;)Ljava/lang/Class;
    at org.springframework.context.event.EventListenerMethodProcessor.afterSingletonsInstantiated(EventListenerMethodProcessor.java:80)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:792)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:839)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:538)
    at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4727)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5189)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:596)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1805)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2017-08-24 15:00:16,982 DEBUG org.springframework.context.event.EventListenerMethodProcessor.afterSingletonsInstantiated:85 - Could not resolve target class for bean with name 'org.springframework.aop.config.internalAutoProxyCreator'
java.lang.NoSuchMethodError: org.springframework.aop.framework.autoproxy.AutoProxyUtils.determineTargetClass(Lorg/springframework/beans/factory/config/ConfigurableListableBeanFactory;Ljava/lang/String;)Ljava/lang/Class;
    at org.springframework.context.event.EventListenerMethodProcessor.afterSingletonsInstantiated(EventListenerMethodProcessor.java:80)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:792)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:839)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:538)
    at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4727)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5189)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:596)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1805)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

The “output” of running “mvn dependency:tree” in the previous project:

Note the “org.springframework:spring-aop:jar:4.1.9.RELEASE:compile” line. Since there is no explicit dependency definition for “spring-aop”, an implicit version (“4.1.9.RELEASE”) is introduced by the “spring-data-jpa” dependency, which cannot work with Spring “4.2.5.RELEASE”, the version used in this project.

So the fix is to add the following dependency in the “pom.xml” file:


<!-- "spring.version" defined in this file is "4.2.5.RELEASE". --><dependency><groupId>org.springframework</groupId><artifactId>spring-aop</artifactId><version>${spring.version}</version></dependency>

In this case JDK version “jdk1.8.0_45” is used, and the installation directory is “/export/jdk1.8.0_45”. Using the same directory on each node (master and slaves) makes it easier to set “JAVA_HOME” in the “~/.bashrc” file of each node.

Create directories for “namenode/datanode” on the “master/slave” nodes


# 1. Run the following command on the "master" node to create the "namenode" directory.
mkdir -p /export/Data/hadoop-2.6.1/hdfs/namenode/
# If the "hadoop" user does not have permission to create this directory, use "root"
# to do it and run the following command to set the owner of this directory as "hadoop".
chown hadoop:hadoop -R /export/Data/hadoop-2.6.1/
# 2. Run the following command on the "slave" nodes to create the "datanode" directory.
mkdir -p /export/Data/hadoop-2.6.1/hdfs/datanode/
# If the "hadoop" user does not have permission to create this directory, use "root"
# to do it and run the following command to set the owner of this directory as "hadoop".
chown hadoop:hadoop -R /export/Data/hadoop-2.6.1/

2017-08-21T10:56:00+08:00 · http://gangmax.me/blog/2017/08/21/hyperledger-fabric-notes

My notes about HyperLedger Fabric v1.0 from the development experience during the last 3 months: May 2017 to Aug 2017.

CAs, Orderers and Peers

A HyperLedger Fabric network has the following 3 types of nodes:

CA: used for user authentication.

Orderer: the central node (there can be more than one) that verifies and orders transactions.

Peer: normally there are multiple peers in a Fabric environment; they store the blockchain and handle requests.

You can see that a HyperLedger Fabric network is NOT a P2P network like Bitcoin or Ethereum, because of the existence of the orderer/CA nodes.

Chaincode

Like Ethereum, HyperLedger Fabric supports running code (“smart contracts”) inside it. In HyperLedger Fabric this code is called chaincode. At this moment, HLF (HyperLedger Fabric) supports chaincode written in Golang and Java. Unlike Ethereum, HLF does not have a VM mechanism to execute chaincode. Instead, in HLF the chaincode looks like an embedded program in which you can leverage the library/APIs provided by HLF and which is finally invoked by HLF. Compared to Ethereum, it’s simpler but less mature.

Docker

Each node (CA/Orderer/Peer) is a Docker instance of the official HyperLedger Fabric image. When executing user-provided chaincode, HLF starts a Docker instance in which the chaincode is executed. HLF depends heavily on Docker.

You can run a full HLF network on a single computer if it has Docker installed. Each CA/Orderer/Peer node is a Docker instance. By default they use the following ports:

CA service: 7054

Orderer: 7050

Peer: 7051 for peer service, 7053 for event hub service

If all the nodes are on the same Docker host, you can use a command like “docker run -p 7151:7051” to forward requests from the host port to the Docker instance port.

Relations

A HLF network can have multiple orderers.

A HLF network can have multiple peers.

A HLF network can have multiple organizations, and each organization can have multiple users. There are two types of users: admin users and normal users.

A peer belongs to a specific organization. One organization can have one or more peers.

A channel is a combination of multiple peers from different organizations. The organizations of a channel share blockchain data; the channel is built for the involved organizations and they share the information of this channel.

Configuration

Use the “cryptogen” command to generate the cryptographic artifacts based on the given “crypto-config.yaml” file.

Use the “configtxgen” command to generate the basic files to start a HLF network, like “genesis.block/channel.tx/Org1MSPanchors.tx/Org2MSPanchors.tx” (depending on the organizations you created).

SDK

When a HLF network is ready, you can use either the command line tools provided by HLF or a program using a HLF SDK to execute operations on it. Essentially, both the command line tools and programs using a HLF SDK send gRPC requests to the CA/Orderer/Peer nodes. The nodes in a HLF network also use gRPC to communicate with each other.

Currently there are 3 official HLF SDKs, divided by implementation language: NodeJS, Java and Python. At this moment (Aug 2017) the degree of maturity is NodeJS > Java > Python.

In our project we encapsulate our business logic into a Java web application and make it a Docker image:

It provides REST APIs for the outside world to execute operations on the HLF blockchain, such as invoking chaincode and querying blockchain transaction/block information.

It uses the SDK provided by HLF (in our case the Java SDK) to perform the operations under the hood.

The Java web application (war) is packaged with Tomcat in the Docker image and is ready to accept HTTP REST requests after the Docker instance is started.

Process

Here is the process to make the whole thing work.

Use “cryptogen” and “configtxgen” to generate the files needed to start a HLF network. You have to define the organizations/users/CAs/Orderers/Peers information in this step.

Set up the HLF network and start the CA/Orderer/Peer nodes.

Create channel.

Make the selected peers join the channel.

Install chaincode on the peers.

Instantiate the chaincode on the peers.

Start the API web application in the Docker images (note that the configuration of the web app should point to the correct CA/Orderer/Peer nodes).

Now you can send requests to the API web application to perform operations on the blockchain.


2017-08-10T20:24:00+08:00 · http://gangmax.me/blog/2017/08/10/mybatis-invalid-bound-statement-issue-caused-by-spring-loading-sequence

This was a really weird issue and it took me some time to fix.

Background

This is a web application using Spring + MyBatis.

The basic mapper xml files/Java classes are generated using Maven. It worked well.

I added some customized content into the existing mapper xml files/Java classes. It worked well.

I was told that the customized content should not be added to the generated mapper xml files/Java classes. Instead, I should create new mapper xml files to contain the DB operation content, and create new Java interface classes which extend the existing generated ones and contain the newly added methods. Then I can use the new Java interface classes directly in my code. This is a valid suggestion and I did so.

But the following error happened when Spring started:


Caused by: org.apache.ibatis.binding.BindingException: Invalid bound statement (not found): com.jcloud.blockchain.orm.mapper.EnhancedChannelMapper.selectByExample
    at org.apache.ibatis.binding.MapperMethod$SqlCommand.<init>(MapperMethod.java:223)
    at org.apache.ibatis.binding.MapperMethod.<init>(MapperMethod.java:48)
    at org.apache.ibatis.binding.MapperProxy.cachedMapperMethod(MapperProxy.java:59)
    at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:52)
    at com.sun.proxy.$Proxy41.selectByExample(Unknown Source)
    at a.b.c.impl.FabricConfigDBHelper.getChannels(FabricConfigDBHelper.java:66)
    at a.b.c.impl.FabricConfigDBImpl.initChannels(FabricConfigDBImpl.java:82)
    at a.b.c.impl.FabricConfigDBImpl.initialize(FabricConfigDBImpl.java:54)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:354)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:305)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:133)
    ... 41 more

Solution

First, I confirmed that the mapper xml files/Java classes I created are all correct. Then I found that if I autowire the mapper instance in code directly, it also works:

Note the “@PostConstruct” annotation. This class needs some initialization after Spring autowires its dependencies, and the “FabricConfigDBHelper” instance is used in that initialization phase, where the exception happens. So I suspect the problem is this: when the initialization of “FabricConfigDBImpl” happens, the MyBatis mapper classes are not fully ready. MyBatis has only finished the basic mapper classes and has not yet parsed the parent mapper classes, if there are any. So only the methods defined in the child mapper interface can be recognized, but the ones defined in the parent mapper interface cannot. To fix it, lazy initialization should be used: the initialization should be done only when the “FabricConfigDBImpl” instance is about to be used.
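The lazy-initialization idea can be sketched in plain Java (class and method names here are invented stand-ins, and the mapper query is replaced by a dummy value; no Spring or MyBatis involved):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: instead of touching collaborators from an init/@PostConstruct hook
// that may run before they are fully wired, defer the work to the first real use.
class LazyConfigStore {
    private volatile boolean initialized = false;
    private final List<String> channels = new CopyOnWriteArrayList<>();

    // Runs at most once, lazily, instead of from a @PostConstruct method.
    private synchronized void ensureInitialized() {
        if (!initialized) {
            channels.add("default-channel"); // stand-in for the mapper query
            initialized = true;
        }
    }

    public List<String> getChannels() {
        ensureInitialized(); // the first caller pays the initialization cost
        return channels;
    }
}
```

By the time the first caller invokes `getChannels()`, the surrounding container has finished wiring everything, so the collaborators are fully ready.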

# Assume your secondary disk is "/dev/sdb" and has only one partition.
sudo blkid /dev/sdb1

Add the following line into the “/etc/fstab” file:


# Assume we mount the partition to the "/home/user/mount" directory.
UUID=8eec26f6-7fea-46d6-b385-f5ba13c24f5e /home/user/mount ext4 defaults 0 2
# About the meaning of the columns, see the reference links.
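In short, each fstab line has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. As a quick sanity check (illustrative, not from the original post), a well-formed entry should split into exactly six fields:

```shell
# fstab columns: <device> <mount point> <type> <options> <dump> <fsck pass>
line='UUID=8eec26f6-7fea-46d6-b385-f5ba13c24f5e /home/user/mount ext4 defaults 0 2'
echo "$line" | awk '{print NF}'  # a valid entry has exactly 6 fields
```

After editing "/etc/fstab", the new entry can be applied without a reboot by creating the mount point (`sudo mkdir -p /home/user/mount`) and running `sudo mount -a`, which also surfaces any syntax errors in the file.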

@Rule
public ExpectedException expectedEx = ExpectedException.none();

@Test
public void shouldThrowRuntimeExceptionWhenEmployeeIDisNull() throws Exception {
    expectedEx.expect(RuntimeException.class);
    expectedEx.expectMessage("Employee ID is null");
    // do something that should throw the exception...
}

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.hyperledger.fabric-sdk-java</groupId>
  <artifactId>fabric-sdk-java</artifactId>
  <packaging>jar</packaging>
  <version>1.0.0</version>
  <name>fabric-java-sdk</name>
  <description>Java SDK for Hyperledger fabric project</description>
  <url>https://www.hyperledger.org/community/projects</url>

  <licenses>
    <license>
      <name>Apache License, Version 2.0</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
    </license>
  </licenses>

  <developers>
    <developer>
      <name>Fabric JAVA SDK Developers</name>
      <email>hyperledger-technical-discuss@lists.hyperledger.org</email>
    </developer>
  </developers>

  <scm>
    <connection>scm:git:git://github.com/hyperledger/fabric-sdk-java.git</connection>
    <developerConnection>scm:git:ssh://github.com/hyperledger/fabric-sdk-java.git</developerConnection>
    <url>http://github.com/hyperledger/fabric-sdk-java</url>
    <tag>fabric-sdk-java-1.0</tag>
  </scm>

  <properties>
    <grpc.version>1.3.0</grpc.version><!-- CURRENT_GRPC_VERSION -->
    <bouncycastle.version>1.55</bouncycastle.version>
    <httpclient.version>4.5.2</httpclient.version>
    <skipITs>true</skipITs>
    <alpn-boot-version>8.1.7.v20160121</alpn-boot-version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <jacoco.version>0.7.9</jacoco.version>
  </properties>

  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-checkstyle-plugin</artifactId>
        <version>2.17</version>
        <reportSets>
          <reportSet>
            <reports>
              <report>checkstyle</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>${jacoco.version}</version>
      </plugin>
    </plugins>
  </reporting>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-netty</artifactId>
      <version>${grpc.version}</version>
    </dependency>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-protobuf</artifactId>
      <version>${grpc.version}</version>
    </dependency>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-stub</artifactId>
      <version>${grpc.version}</version>
    </dependency>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-tcnative-boringssl-static</artifactId>
      <version>1.1.33.Fork26</version>
    </dependency>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-codec-http2</artifactId>
      <version>4.1.8.Final</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java -->
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>3.1.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on -->
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcpkix-jdk15on</artifactId>
      <version>${bouncycastle.version}</version>
    </dependency>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.2</version>
    </dependency>
    <dependency>
      <groupId>commons-cli</groupId>
      <artifactId>commons-cli</artifactId>
      <version>1.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-compress</artifactId>
      <version>1.12</version>
    </dependency>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.4</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/log4j/log4j -->
    <!---
    <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>2.6.2</version> </dependency>
    <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <version>2.6.2</version> </dependency>
    -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.glassfish/javax.json -->
    <dependency>
      <groupId>org.glassfish</groupId>
      <artifactId>javax.json</artifactId>
      <version>1.0.4</version>
    </dependency>
    <!--&lt;!&ndash; https://mvnrepository.com/artifact/org.mortbay.jetty.alpn/jetty-alpn-agent &ndash;&gt;-->
    <!--<dependency>-->
    <!--<groupId>org.mortbay.jetty.alpn</groupId>-->
    <!--<artifactId>jetty-alpn-agent</artifactId>-->
    <!--<version>2.0.1</version>-->
    <!--</dependency>-->
    <!-- https://mvnrepository.com/artifact/org.mortbay.jetty.alpn/alpn-boot -->
    <!--<dependency>-->
    <!--<groupId>org.mortbay.jetty.alpn</groupId>-->
    <!--<artifactId>alpn-boot</artifactId>-->
    <!--<version>${alpn-boot-version}</version>-->
    <!--</dependency>-->
    <!-- https://mvnrepository.com/artifact/org.yaml/snakeyaml -->
    <dependency>
      <groupId>org.yaml</groupId>
      <artifactId>snakeyaml</artifactId>
      <version>1.18</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.jacoco/jacoco-maven-plugin -->
    <dependency>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>${jacoco.version}</version>
    </dependency>
  </dependencies>

  <build>
    <resources>
      <resource>
        <filtering>false</filtering>
        <directory>src</directory>
        <includes>
          <include>**/*.properties</include>
          <include>**/*.Docker</include>
        </includes>
      </resource>
    </resources>
    <extensions>
      <extension>
        <groupId>kr.motd.maven</groupId>
        <artifactId>os-maven-plugin</artifactId>
        <version>1.4.1.Final</version>
      </extension>
    </extensions>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.19.1</version>
        <configuration>
          <argLine>${surefireArgLine}</argLine>
          <includes>
            <include>**/*Test.java</include>
          </includes>
          <!--<useSystemClassLoader>true</useSystemClassLoader>-->
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <version>2.19.1</version>
        <configuration>
          <argLine>${failsafeArgLine}</argLine>
          <includes>
            <include>**/IntegrationSuite.java</include>
          </includes>
          <skipITs>${skipITs}</skipITs>
          <!--<argLine>-->
          <!-- -Xbootclasspath/p:${settings.localRepository}/org/mortbay/jetty/alpn/alpn-boot/${alpn-boot-version}/alpn-boot-${alpn-boot-version}.jar-->
          <!--</argLine>-->
        </configuration>
        <executions>
          <execution>
            <id>failsafe-integration-tests</id>
            <phase>integration-test</phase>
            <goals>
              <goal>integration-test</goal>
              <goal>verify</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.xolstice.maven.plugins</groupId>
        <artifactId>protobuf-maven-plugin</artifactId>
        <version>0.5.0</version>
        <configuration>
          <!-- The version of protoc must match protobuf-java. If you don't depend on
               protobuf-java directly, you will be transitively depending on the
               protobuf-java version that grpc depends on. -->
          <protocArtifact>com.google.protobuf:protoc:3.0.0:exe:${os.detected.classifier}</protocArtifact>
          <pluginId>grpc-java</pluginId>
          <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>compile-custom</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.3</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.10.4</version>
        <configuration>
          <excludePackageNames>
            org.hyperledger.fabric_ca.sdk.helper:org.hyperledger.fabric.protos.*:org.hyperledger.fabric.sdk.helper:org.hyperledger.fabric.sdk.transaction:org.hyperledger.fabric.sdk.security
          </excludePackageNames>
          <show>public</show>
          <doctitle>Hyperledger Fabric Java SDK</doctitle>
          <nohelp>true</nohelp>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-checkstyle-plugin</artifactId>
        <version>2.17</version>
        <executions>
          <execution>
            <goals>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <consoleOutput>true</consoleOutput>
          <logViolationsToConsole>true</logViolationsToConsole>
          <failOnViolation>true</failOnViolation>
          <failsOnError>true</failsOnError>
          <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
          <configLocation>checkstyle-config.xml</configLocation>
          <includeTestSourceDirectory>true</includeTestSourceDirectory>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>${jacoco.version}</version>
        <configuration>
          <excludes>
            <exclude>**/org/hyperledger/fabric/protos/**</exclude>
          </excludes>
        </configuration>
        <executions>
          <execution>
            <id>default-prepare-agent</id>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
          </execution>
          <execution>
            <id>default-report</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>report</goal>
            </goals>
          </execution>
          <!-- Prepares the property pointing to the JaCoCo runtime agent which is
               passed as VM argument when the Maven Surefire plugin is executed. -->
          <execution>
            <id>pre-unit-test</id>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
            <configuration>
              <propertyName>surefireArgLine</propertyName>
              <!-- Sets the path to the file which contains the execution data. -->
              <destFile>${project.build.directory}/coverage-reports/jacoco-ut.exec</destFile>
              <!-- Sets the name of the property containing the settings for JaCoCo runtime agent. -->
            </configuration>
          </execution>
          <!-- Ensures that the code coverage report for unit tests is created after unit tests have been run. -->
          <execution>
            <id>post-unit-test</id>
            <phase>test</phase>
            <goals>
              <goal>report</goal>
            </goals>
            <configuration>
              <!-- Sets the path to the file which contains the execution data. -->
              <dataFile>${project.build.directory}/coverage-reports/jacoco-ut.exec</dataFile>
              <!-- Sets the output directory for the code coverage report. -->
              <outputDirectory>${project.reporting.outputDirectory}/jacoco-ut</outputDirectory>
            </configuration>
          </execution>
          <!-- The Executions required by unit tests are omitted. -->
          <!-- Prepares the property pointing to the JaCoCo runtime agent which is
               passed as VM argument when the Maven Failsafe plugin is executed. -->
          <execution>
            <id>pre-integration-test</id>
            <phase>pre-integration-test</phase>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
            <configuration>
              <!-- Sets the path to the file which contains the execution data. -->
              <destFile>${project.build.directory}/coverage-reports/jacoco-it.exec</destFile>
              <!-- Sets the name of the property containing the settings for JaCoCo runtime agent. -->
              <propertyName>failsafeArgLine</propertyName>
            </configuration>
          </execution>
          <!-- Ensures that the code coverage report for integration tests is created after integration tests have been run. -->
          <execution>
            <id>post-integration-test</id>
            <phase>post-integration-test</phase>
            <goals>
              <goal>report</goal>
            </goals>
            <configuration>
              <!-- Sets the path to the file which contains the execution data. -->
              <dataFile>${project.build.directory}/coverage-reports/jacoco-it.exec</dataFile>
              <!-- Sets the output directory for the code coverage report. -->
              <outputDirectory>${project.reporting.outputDirectory}/jacoco-it</outputDirectory>
            </configuration>
          </execution>
          <execution>
            <id>merge-results</id>
            <phase>verify</phase>
            <goals>
              <goal>merge</goal>
            </goals>
            <configuration>
              <fileSets>
                <!-- Implementation attribute not needed in Maven 3 -->
                <!--<fileSet implementation="org.apache.maven.shared.model.fileset.FileSet">-->
                <fileSet>
                  <directory>${project.build.directory}/coverage-reports</directory>
                  <includes>
                    <include>*.exec</include>
                  </includes>
                </fileSet>
              </fileSets>
              <!-- File containing the merged data -->
              <destFile>${project.build.directory}/jacoco-merged/merged.exec</destFile>
            </configuration>
          </execution>
          <execution>
            <id>post-merge-report</id>
            <phase>verify</phase>
            <goals>
              <goal>report</goal>
            </goals>
            <configuration>
              <dataFile>${project.build.directory}/jacoco-merged/merged.exec</dataFile>
              <outputDirectory>${project.reporting.outputDirectory}/jacoco-aggregate</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <distributionManagement>
    <snapshotRepository>
      <id>ossrh</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    </snapshotRepository>
  </distributionManagement>

  <profiles>
    <profile>
      <id>release</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.sonatype.plugins</groupId>
            <artifactId>nexus-staging-maven-plugin</artifactId>
            <version>1.6.7</version>
            <extensions>true</extensions>
            <configuration>
              <serverId>ossrh</serverId>
              <nexusUrl>https://oss.sonatype.org/</nexusUrl>
              <autoReleaseAfterClose>false</autoReleaseAfterClose>
            </configuration>
          </plugin>
          <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>2.3</version>
            <configuration>
              <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
              </descriptorRefs>
            </configuration>
            <executions>
              <execution>
                <phase>package</phase>
                <goals>
                  <goal>single</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-gpg-plugin</artifactId>
            <configuration>
              <useAgent>true</useAgent>
            </configuration>
            <version>1.5</version>
            <executions>
              <execution>
                <id>sign-artifacts</id>
                <phase>verify</phase>
                <goals>
                  <goal>sign</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-source-plugin</artifactId>
            <version>2.2.1</version>
            <executions>
              <execution>
                <id>attach-sources</id>
                <goals>
                  <goal>jar-no-fork</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <version>2.10.4</version>
            <configuration>
              <excludePackageNames>
                org.hyperledger.fabric_ca.sdk.helper:org.hyperledger.fabric.protos.*:org.hyperledger.fabric.sdk.helper:org.hyperledger.fabric.sdk.transaction:org.hyperledger.fabric.sdk.security
              </excludePackageNames>
              <show>public</show>
              <doctitle>Hyperledger Fabric Java SDK</doctitle>
              <nohelp>true</nohelp>
            </configuration>
            <executions>
              <execution>
                <id>attach-javadocs</id>
                <goals>
                  <goal>jar</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

The Node SDK supports Node versions “v6.2.0 - v6.10.0” (Node v7+ is not supported), and I was using Node v7.4.0. After I switched from v7.4.0 to v6.9.4, the error above was reported. The reason: I had run the “npm install” command under Node v7.4.0, which compiled some native modules against Node v7.4.0, and those modules cannot run under Node v6.9.4. The fix is simply to remove the “node_modules” directory and run “npm install” again under Node v6.9.4, so that the modules are recompiled for that version. That fixed the problem.
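Before reinstalling, it can help to verify that the active Node version actually falls in the supported range. A small sketch (not from the original post; it relies on `sort -V` from GNU coreutils, and the version string is hard-coded for illustration where you would substitute the output of `node --version`):

```shell
ver="v6.9.4"              # illustrative; in practice: ver=$(node --version)
low="v6.2.0"; high="v6.10.0"
# sort -V orders version strings numerically; if $ver sorts into the middle
# position, it lies within the supported range.
if [ "$(printf '%s\n' "$low" "$ver" "$high" | sort -V | sed -n 2p)" = "$ver" ]; then
  echo "supported"
else
  echo "unsupported"
fi
```

Once on a supported version, `rm -rf node_modules && npm install` rebuilds the native modules against the correct Node ABI.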

{
  "name": "test-node-sdk",
  "version": "0.0.1",
  "description": "Demo how to use the Hyperledger Fabric node SDK",
  "main": "main.js",
  "author": "Max Huang",
  "dependencies": {
    "fabric-client": "1.0.0-rc1",
    "fabric-ca-client": "1.0.0-rc1"
  }
}

I hit a problem where I could not ping the IP address “172.17.54.104” (an internal server) from my Linux VM. The root cause is that Docker is installed on the VM, and by default it creates a “docker0” bridge using the 172.17.x.x range. An IP routing rule is therefore created for 172.17.x.x, which shadows the correct route to the server I wanted to reach.
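To see why the ping fails: 172.17.54.104 falls inside Docker's default bridge subnet 172.17.0.0/16, so the docker0 route wins over the default gateway for that destination. A trivial illustration of the prefix match (the IP is the one from this example):

```shell
ip="172.17.54.104"
# Any destination in 172.17.0.0/16 is captured by the docker0 route:
case "$ip" in
  172.17.*) echo "matches docker0 subnet" ;;
  *)        echo "unaffected" ;;
esac
```

The conflicting rule can be confirmed with `ip route`. Note that deleting the route is only a workaround, since Docker typically recreates it when the daemon restarts; a longer-term fix is to move Docker to a different subnet, e.g. via the `bip` setting in `/etc/docker/daemon.json`.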

A workaround is to execute the following command to remove the IP routing rule: