You never need to use new Object() in JavaScript. Use the object literal {} instead. Similarly, don’t use new Array(), use the array literal [] instead. Arrays in JavaScript work nothing like the arrays in Java, and use of the Java-like syntax will confuse you. LINK

If you use a named index, JavaScript will redefine the array to a standard object. After that, all array methods and properties will produce incorrect results… In JavaScript, arrays always use numbered indexes. LINK
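A quick sketch of that pitfall (the variable name is my own):

```javascript
var person = [];
person["firstName"] = "John"; // a named index is stored as an object property,
                              // not as an array element
console.log(person.length);   // 0, because named keys don't count as elements
console.log(person[0]);       // undefined
```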

Reference

<p>Executing a language roughly goes through these stages: lexical analysis – syntax analysis – semantic analysis – intermediate code generation – code optimization – code generation. In JavaScript, the Syntax Parser performs <strong>lexical analysis</strong> and <strong>syntax analysis</strong>.</p>
<blockquote>
<p>A program that reads your code and determines what it does and if its grammar is valid.</p>
</blockquote>
<p>Lexical analysis scans the code character by character and picks out the key tokens. Syntax analysis then uses those tokens to build an <strong>Abstract Syntax Tree</strong> (AST) that captures the contextual relationships. We rarely work with the syntax tree directly, but it comes up in Uglify code minification, IDE syntax highlighting, Babel recompilation, keyword matching, and scope resolution.</p>
JS学习 Array入门 (JS Study: Array Basics)
http://hackjutsu.com/2016/10/15/JS学习 Array 入门/
2016-10-16T01:00:45.000Z / 2016-11-19T01:31:52.000Z

JavaScript arrays are used to store multiple values in a single variable. The topics covered are summarized below.

Creating an Array

How to Recognize an Array

Array Properties and Methods

pop & push

shift & unshift

splice/join/delete/find/slice

Disclaimer: This is my note for JavaScript study, where part of the content is copied from other sources. Please go to the Reference part to see the original posts.

Array is an Object

Arrays are a special type of object. The typeof operator in JavaScript returns “object” for arrays. But JavaScript arrays are still best described as arrays: they use numbers to access their “elements”, rather than self-defined named keys.
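A minimal sketch of this (my own example data):

```javascript
var cars = ["Saab", "Volvo", "BMW"];
console.log(typeof cars);         // "object", even though cars is an array
console.log(Array.isArray(cars)); // true
console.log(cars[1]);             // "Volvo", accessed by a numeric index
```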

Creating an Array

// method 1: the array literal (preferred)
var cars = ["Saab", "Volvo", "BMW"];

// method 2: exactly the same result but more verbose; avoid it
var cars = new Array("Saab", "Volvo", "BMW");

Never put a comma after the last element (like “BMW”,). The effect is inconsistent across browsers.

Array Properties and Methods

var x = cars.length; // The length property returns the number of elements
var y = cars.sort(); // The sort() method sorts the array in place
var z = cars.reverse(); // The reverse() method reverses the array in place

Popping and Pushing

The pop() method removes the last element from an array and returns the value that was popped.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
var x = fruits.pop(); // the value of x is "Mango"

The push() method adds a new element to an array (at the end), and returns the new array length.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
var x = fruits.push("Kiwi"); // the value of x is 5

Shifting and Unshifting

Shifting is equivalent to popping, but it works on the first element instead of the last. The shift() method removes the first array element, “shifts” all other elements to a lower index, and returns the item that was shifted out.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
fruits.shift(); // Removes the first element "Banana" from fruits

The unshift() method adds a new element to an array (at the beginning), “unshifts” older elements to higher indexes, and returns the new length.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
fruits.unshift("Lemon"); // Adds a new element "Lemon" to fruits

Splicing an Array

splice() can be used to add new elements to an array.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
fruits.splice(2, 0, "Lemon", "Kiwi");
// [ 'Banana', 'Orange', 'Lemon', 'Kiwi', 'Apple', 'Mango' ]

The first parameter (2) defines the position where new elements should be added (spliced in). The second parameter (0) defines how many elements should be removed. The rest of the parameters (“Lemon”, “Kiwi”) define the new elements to be added.

Deleting Elements

Using the JavaScript operator delete

Using delete may leave undefined holes in the array. Use pop() or shift() instead.
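A small sketch of the difference (my own example data):

```javascript
var fruits = ["Banana", "Orange", "Apple", "Mango"];
delete fruits[0];           // deletes the value but keeps the slot
console.log(fruits.length); // still 4
console.log(fruits[0]);     // undefined, i.e. a hole
fruits.shift();             // shift() removes the slot itself
console.log(fruits.length); // 3, no hole left
```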

Using splice to delete an element without holes.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
fruits.splice(0, 1); // Removes the first element of fruits

The first parameter (0) defines the position where the splice starts. The second parameter (1) defines how many elements should be removed. The rest of the parameters are omitted, so no new elements will be added.

Using filter to delete an element without holes.

var fruits = ["Banana", "Orange", "Apple", "Mango"];
var toDelete = "Apple";
fruits = fruits.filter(function(value) {
  return value != toDelete;
});
// [ 'Banana', 'Orange', 'Mango' ]

Finding an Element

The find() method returns the value of the first element in an array that passes a test (provided as a function).

var ages = [3, 10, 18, 20];

function checkAdult(age) {
  return age >= 18;
}

function myFunction() {
  document.getElementById("demo").innerHTML = ages.find(checkAdult); // 18
}

Slicing an Array

The slice() method slices out a piece of an array into a new array. This example slices out a part of an array starting from array element 1 (“Orange”):

var fruits = ["Banana", "Orange", "Lemon", "Apple", "Mango"];
var citrus = fruits.slice(1);
// Orange,Lemon,Apple,Mango

The slice() method can take two arguments, like slice(1, 3). The method then selects elements from the start argument up to (but not including) the end argument.

var fruits = ["Banana", "Orange", "Lemon", "Apple", "Mango"];
var citrus = fruits.slice(1, 3); // [ 'Orange', 'Lemon' ]

If the end argument is omitted, like in the first examples, the slice() method slices out the rest of the array.

How to Recognize an Array

Solution 1 (ES5)

Array.isArray(fruits); // returns true

Solution 2: create your own isArray function.

function isArray(x) {
  return x.constructor.toString().indexOf("Array") > -1;
}

Solution 3: use the instanceof operator.

fruits instanceof Array; // returns true

Reference

Cool URIs don't change
http://hackjutsu.com/2016/10/14/Cool URIs don’t change/
2016-10-15T01:00:01.000Z / 2016-10-14T22:46:29.000Z

This is a repost from here for my reference. Please go to the original post for the most up-to-date information.

What makes a cool URI?
A cool URI is one which does not change.
What sorts of URI change?
URIs don’t change: people change them.

There are no reasons at all in theory for people to change URIs (or stop maintaining documents), but millions of reasons in practice.

In theory, the domain name space owner owns the domain name space and therefore all URIs in it. Except insolvency, nothing prevents the domain name owner from keeping the name. And in theory the URI space under your domain name is totally under your control, so you can make it as stable as you like. Pretty much the only good reason for a document to disappear from the Web is that the company which owned the domain name went out of business or can no longer afford to keep the server running. Then why are there so many dangling links in the world? Part of it is just lack of forethought. Here are some reasons you hear out there:

We just reorganized our website to make it better.

Do you really feel that the old URIs cannot be kept running? If so, you chose them very badly. Think of your new ones so that you will be able to keep them running after the next redesign.

We have so much material that we can’t keep track of what is out of date and what is confidential and what is valid and so we thought we’d better just turn the whole lot off.

That I can sympathize with - the W3C went through a period like that, when we had to carefully sift archival material for confidentiality before making the archives public. The solution is forethought - make sure you capture with every document its acceptable distribution, its creation date and ideally its expiry date. Keep this metadata.

Well, we found we had to move the files…

This is one of the lamest excuses. A lot of people don’t know that servers such as Apache give you a lot of control over a flexible relationship between the URI of an object and where a file which represents it actually is in a file system. Think of the URI space as an abstract space, perfectly organized. Then, make a mapping onto whatever reality you actually use to implement it. Then, tell your server. You can even write bits of your server to make it just right.
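For instance, Apache's Alias and Redirect directives let a stable URI outlive the file layout behind it. This is a hypothetical sketch; the paths and hostname are made up:

```apache
# The published URI stays stable while the file moves around on disk
Alias /reports/2016/annual /var/www/archive/2016/annual-report.html

# An old URI that must not break is redirected to the stable one
Redirect permanent /old/annual-report.html http://example.com/reports/2016/annual
```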

John doesn’t maintain that file any more, Jane does.

Whatever was that URI doing with John’s name in it? It was in his directory? I see.

We used to use a cgi script for this and now we use a binary program.

There is a crazy notion that pages produced by scripts have to be located in a “cgibin” or “cgi” area. This is exposing the mechanism of how you run your server. You change the mechanism (even keeping the content the same ) and whoops - all your URIs change.

the main page for starting to look for documents, is clearly not going to be something to trust to being there in a few years. “cgi-bin” and “oldbrowse” and “.pl” all point to bits of how-we-do-it-now. By contrast, if you use the page to find a document, you get first an equally bad

Looking at this one, the “pubs/1998” header is going to give any future archive service a good clue that the old 1998 document classification scheme is in progress. Though in 2098 the document numbers might look different, I can imagine this URI still being valid, and the NSF or whatever carries on the archive not being at all embarrassed about it.

I didn’t think URLs have to be persistent - that was URNs.

This is probably one of the worst side-effects of the URN discussions. Some seem to think that because there is research about namespaces which will be more persistent, they can be as lax about dangling links as they like, as “URNs will fix all that”. If you are one of these folks, then allow me to disillusion you.

Most URN schemes I have seen look something like an authority ID followed by either a date and a string you choose, or just a string you choose. This looks very like an HTTP URI. In other words, if you think your organization will be capable of creating URNs which will last, then prove it by doing it now and using them for your HTTP URIs. There is nothing about HTTP which makes your URIs unstable. It is your organization. Make a database which maps document URN to current filename, and let the web server use that to actually retrieve files.

If you have gotten to this point, then unless you have the time and money and contacts to get some software design done, then you might claim the next excuse:

We would like to, but we just don’t have the right tools.

Now here is one I can sympathize with. I agree entirely. What you need to do is to have the web server look up a persistent URI in an instant and return the file, wherever your current crazy file system has it stored away at the moment. You would like to be able to store the URI in the file as a check, and constantly keep the database in tune with actuality. You’d like to store the relationships between different versions and translations of the same document, and you’d like to keep an independent record of the checksum to provide a guard against file corruption by accidental error. And web servers just don’t come out of the box with these features. When you want to create a new document, your editor asks you for a URI instead of telling you.

You need to be able to change things like ownership, access, archive level security level, and so on, of a document in the URI space without changing the URI.

Too bad. But we’ll get there. At W3C we use Jigedit functionality (Jigsaw server used for editing) which does track versions, and we are experimenting with document creation scripts. If you make tools, servers and clients, take note!

This is an outstanding reason, which applies for example to many W3C pages including this one: so do what I say, not what I do.

Why should I care?

When you change a URI on your server, you can never completely tell who will have links to the old URI. They might have made links from regular web pages. They might have bookmarked your page. They might have scrawled the URI in the margin of a letter to a friend.

When someone follows a link and it breaks, they generally lose confidence in the owner of the server. They are also frustrated, emotionally and practically prevented from accomplishing their goal.

Enough people complain all the time about dangling links that I hope the damage is obvious. I hope it is also obvious that the reputation damage is to the maintainer of the server whose document vanished.

So what should I do? Designing URIs

It is the duty of a Webmaster to allocate URIs which you will be able to stand by in 2 years, in 20 years, in 200 years. This needs thought, and organization, and commitment.

URIs change when there is some information in them which changes. It is critical how you design them. (What, design a URI? I have to design URIs? Yes, you have to think about it.). Designing mostly means leaving information out.

The creation date of the document - the date the URI is issued - is one thing which will not change. It is very useful for separating requests which use a new system from those which use an old system. That is one thing with which it is good to start a URI. If a document is in any way dated, even though it will be of interest for generations, then the date is a good starter.

The only exception is a page which is deliberately a “latest” page for, for example, the whole organization or a large part of it.

is the latest “Money daily” column in “Money” magazine. The main reason for not needing the date in this URI is that there is no reason for the persistence of the URI to outlast the magazine. The concept of “today’s Money” vanishes if Money goes out of production. If you want to link to the content, you would link to it where it appears separately in the archives as

(Looks good. Assumes that “money” will mean the same thing throughout the life of pathfinder.com. There is a duplication of “98” and an “.html” you don’t need but otherwise this looks like a strong URI).

What to leave out

Everything! After the creation date, putting any information in the name is asking for trouble one way or another.

Author's name - authorship can change with new versions. People quit organizations and hand things on.

Subject. This is tricky. It always looks good at the time but changes surprisingly fast. I discuss this more below.

Status - directories like “old” and “draft” and so on, not to mention “latest” and “cool”, appear all over file systems. Documents change status - or there would be no point in producing drafts. The latest version of a document needs a persistent identifier whatever its status is. Keep the status out of the name.

Access. At W3C we divide the site into “Team access”, “Member access” and “Public access”. It sounds good, but of course documents start off as team ideas, are discussed with members, and then go public. A shame indeed if every time some document is opened to wider discussion all the old links to it fail! We are switching to a simple date code now.

File name extension. This is a very common one. “cgi”, even “.html” is something which will change. You may not be using HTML for that page in 20 years time, but you might want today’s links to it to still be valid. The canonical way of making links to the W3C site doesn’t use the extension. (how?)

Software mechanisms. Look for “cgi”, “exec” and other give-away “look what software we are using” bits in URIs. Anyone want to commit to using perl cgi scripts all their lives? Nope? Cut out the .pl. Read the server manual on how to do it.

Topics and Classification by subject

I’ll go into this danger in more detail as it is one of the more difficult things to avoid. Typically, topics end up in URIs when you classify your documents according to a breakdown of the work you are doing. That breakdown will change. Names for areas will change. At W3C we wanted to change “MarkUp” to “Markup” and then to “HTML” to reflect the actual content of the section. Also, beware that this is often a flat name space. In 100 years are you sure you won’t want to reuse anything? We wanted to reuse “History” and “Stylesheets” for example in our short life.

This is a tempting way of organizing a web site - and indeed a tempting way of organizing anything, including the whole web. It is a great medium term solution but has serious drawbacks in the long term.

Part of the reason for this lies in the philosophy of meaning. Every term in the language is a potential clustering subject, and each person can have a different idea of what it means. Because the relationships between subjects are web-like rather than tree-like, even people who agree on a web may pick a different tree representation. These are my (oft repeated) general comments on the dangers of hierarchical classification as a general solution.

Effectively, when you use a topic name in a URI you are binding yourself to some classification. You may in the future prefer a different one. Then, the URI will be liable to break.

A reason for using a topic area as part of the URI is that responsibility for sub-parts of a URI space is typically delegated, and then you need a name for the organizational body - the subdivision or group or whatever - which has responsibility for that sub-space. This is binding your URIs to the organizational structure. It is typically safe only when protected by a date further up the URI (to the left of it): 1998/pics can be taken to mean for your server “what we meant in 1998 by pics”, rather than “what in 1998 we did with what we now refer to as pics.”

Don’t forget the domain name.

Remember that this applies not only to the “path” part of a URI but to the server name. If you have separate servers for some of your stuff, remember that that division will be impossible to change without destroying many, many links. Some classic “look what software we are using today” domain names are “cgi.pathfinder.com”, “secure”, “lists.w3.org”. They are made to make administration of the servers easier. Whether it represents divisions in your company, or document status, or access level, or security level, be very, very careful before using more than one domain name for more than one type of document. Remember that you can hide many web servers inside one apparent web server using redirection and proxying.

Oh, and do think about your domain name. If your name is not soap, will you want to be referred to as “soap.com” even when you have switched your product line to something else? (With apologies to whoever owns soap.com at the moment.)

Conclusion

Keeping URIs so that they will still be around in 2, 20 or 200 or even 2000 years is clearly not as simple as it sounds. However, all over the Web, webmasters are making decisions which will make it really difficult for themselves in the future. Often, this is because they are using tools whose task is seen as to present the best site in the moment, and no one has evaluated what will happen to the links when things change. The message here is, however, that many, many things can change and your URIs can and should stay the same. They only can if you think about how you design them.

Process 与 Thread (Process vs. Thread)
http://hackjutsu.com/2016/09/29/Process 与 Thread/
2016-09-30T01:00:00.000Z / 2016-09-29T22:41:46.000Z

These are reading notes on Modern Operating System (4th Edition). They excerpt and summarize the author's thoughts on the origin of the process and its relationship to the thread.

1. To group resources together

At first, the CPU could only run the next program after executing the current one to completion. Later, by slicing CPU time, people could make multiple tasks appear to run simultaneously.

To better distinguish these “simultaneously” running tasks and to group each one's resources, people introduced the concept of the process.

A process is basically a program in execution… It is fundamentally a container that holds all the information needed to run a program.

Each process has its own:

address space: a list of memory locations from 0 to some maximum, which the process can read and write.

resources: commonly including registers (including the program counter and stack pointer), a list of related processes, and all the other information needed to run the program.

Processes communicate with one another through IPC (inter-process communication).

2. To improve efficiency

Originally a process had only one thread of control to execute its task. People later realized that if a process could have multiple threads of control that share the process's resources and cooperate, efficiency would improve greatly. Hence the concept of the thread. Each thread has its own stack, used to record its execution history.

Unlike different processes, which may be from different users and which may be hostile to one another, a process is always owned by a single user, who has presumably created multiple threads so that they can cooperate, not fight.

Why not use multiple cooperating processes instead?

…they are lighter weight than processes, they are easier (i.e., faster) to create and destroy than processes. In many systems, creating a thread goes 10-100 times faster than creating a process.

Moreover, sharing resources and passing information between processes (IPC) is less efficient than between threads, which share the address space and other resources.

3. Summary

The process model consists of two independent concepts:

resource grouping

execution

On resource grouping:

One way of looking at a process is that it is a way to group related resources together. A process has an address space containing program text and data, as well as other resource. These resources may include open files, child processes, pending alarms, signal handlers, accounting information, and more. By putting them together in the form of a process, they can be managed more easily.

On execution:

The other concept a process has is a thread of control, usually shortened to just thread. The thread has a program counter that keeps track of which instruction to execute next. It has registers, which hold its current working variables. It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.

Although process and thread are closely related, conceptually they can be distinguished as follows:

Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.

This is a repost of an article from tutorialspoint. Please check out the original post for the most up-to-date information.

Unfortunately, not all computers store the bytes that comprise a multibyte value in the same order. Consider a 16-bit integer that is made up of 2 bytes. There are two ways to store this value.

Little Endian − In this scheme, the low-order byte is stored at the starting address (A) and the high-order byte at the next address (A + 1).

Big Endian − In this scheme, the high-order byte is stored at the starting address (A) and the low-order byte at the next address (A + 1).

To allow machines with different byte order conventions to communicate with each other, the Internet protocols specify a canonical byte order convention for data transmitted over the network. This is known as Network Byte Order.

While establishing an Internet socket connection, you must make sure that the data in the sin_port and sin_addr members of the sockaddr_in structure are represented in Network Byte Order.

Byte Ordering Functions

Routines for converting data between a host’s internal representation and Network Byte Order are as follows −

htons(): Host to Network Short
htonl(): Host to Network Long
ntohl(): Network to Host Long
ntohs(): Network to Host Short

Listed below are some more detail about these functions −

unsigned short htons(unsigned short hostshort) − This function converts 16-bit (2-byte) quantities from host byte order to network byte order.

unsigned long htonl(unsigned long hostlong) − This function converts 32-bit (4-byte) quantities from host byte order to network byte order.

unsigned short ntohs(unsigned short netshort) − This function converts 16-bit (2-byte) quantities from network byte order to host byte order.

unsigned long ntohl(unsigned long netlong) − This function converts 32-bit quantities from network byte order to host byte order.

These functions are macros and result in the insertion of conversion source code into the calling program. On little-endian machines, the code will change the values around to network byte order. On big-endian machines, no code is inserted since none is needed; the functions are defined as null.

Program to Determine Host Byte Order

Keep the following code in a file byteorder.c and then compile and run it on your machine.

In this example, we store the two-byte value 0x0102 in the short integer and then look at the two consecutive bytes, c[0] (the address A) and c[1] (the address A + 1) to determine the byte order.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    union {
        short s;
        char c[sizeof(short)];
    } un;

    un.s = 0x0102;
    if (sizeof(short) == 2) {
        if (un.c[0] == 1 && un.c[1] == 2)
            printf("big-endian\n");
        else if (un.c[0] == 2 && un.c[1] == 1)
            printf("little-endian\n");
        else
            printf("unknown\n");
    } else {
        printf("sizeof(short) = %zu\n", sizeof(short));
    }
    exit(0);
}

An output generated by this program on a Pentium machine is as follows −

$> gcc byteorder.c
$> ./a.out
little-endian
$>

Resource

<p><img src="http://i.imgur.com/Y4GHzVH.png" alt=""><br>
Difference between set, export and env in bash
http://hackjutsu.com/2016/08/04/Difference between set, export and env in bash/
2016-08-05T01:00:01.000Z / 2016-08-04T19:29:21.000Z

What’s the difference between set, export and env, and when should we use each?

Setting Variables

Let us consider a specific example. The grep command uses an environment variable called GREP_OPTIONS to set default options.

Now, given that the file test.txt contains the following lines:

line one
line two

running the command grep one test.txt will return

line one

If you run grep with the -v option, it will return the non-matching lines, so the output will be

line two

We will now try to set the option with an environmental variable.

Environment variables set without export will not be inherited in the environment of the commands you are calling.

GREP_OPTIONS='-v'
grep one test.txt

The result:

line one

Obviously, the option -v did not get passed to grep.

You want to use this form when you are setting a variable only for the shell to use; for example, the loop variable in for i in * ; do should not be exported.

However, the variable is passed on to the environment of that particular command line, so you can do

GREP_OPTIONS='-v' grep one test.txt

which will return the expected

line two

You use this form to temporarily change the environment of this particular instance of the launched program.

Exporting variables

Exporting a variable causes the variable to be inherited:

export GREP_OPTIONS='-v'
grep one test.txt

returns now

line two

This is the most common way of setting variables for use by subsequently started processes in a shell.

Env

This was all done in bash. export is a bash builtin; VAR=whatever is bash syntax. env, on the other hand, is a program in itself. When env is called, the following things happen:

1. The command `env` gets executed as a new process
2. `env` modifies the environment, and
3. calls the command that was provided as an argument. The `env` process is replaced by the `command` process.

Example:

env GREP_OPTIONS='-v' grep one test.txt

This command will launch two new processes: (i) env and (ii) grep (actually, the second process will replace the first one). From the point of view of the grep process, the result is exactly the same as running

GREP_OPTIONS='-v' grep one test.txt

However, you can use this idiom if you are outside of bash or don’t want to launch another shell (for example, when you are using the exec() family of functions rather than the system() call).

Additional note on #!/usr/bin/env

This is also why the idiom #!/usr/bin/env interpreter is used rather than #!/usr/bin/interpreter. env does not require a full path to a program, because it uses the execvp() function which searches through the PATH variable just like a shell does, and then replaces itself by the command run. Thus, it can be used to find out where an interpreter (like perl or python) “sits” on the path.
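A minimal sketch of this (the script name is made up): the shebang line works regardless of where bash is actually installed, because env resolves it through PATH:

```shell
cat > hello.sh <<'EOF'
#!/usr/bin/env bash
echo "hello from env"
EOF
chmod +x hello.sh
./hello.sh    # prints: hello from env
rm hello.sh
```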

It also means that by modifying the current path you can influence which python variant will be called. This makes the following possible:

echo -e '#!/usr/bin/bash\n\necho I am an evil interpreter!' > python
chmod a+x ./python
export PATH=.
calibre

instead of launching Calibre, will result in

I am an evil interpreter!

Resource

<p>What’s the difference between <code>set</code>, <code>export</code> and <code>env</code> and when should we use each?<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><div class="line">1</div><div class="line">2</div><div class="line">3</div></pre></td><td class="code"><pre><div class="line">key=value</div><div class="line">env key=value</div><div class="line"><span class="built_in">export</span> key=value</div></pre></td></tr></table></figure></p>
Load Average on Unix-like Systems
http://hackjutsu.com/2016/06/21/Load Average on Unix-like Systems/
2016-06-22T01:00:01.000Z / 2016-06-22T00:34:31.000Z

Linux, Mac, and other Unix-like systems display “load average” numbers. These numbers tell you how busy your system’s CPU, disk, and other resources are. They’re not self-explanatory at first, but it’s easy to become familiar with them. Whether you’re using a Linux desktop or server, a Linux-based router firmware, a NAS system based on Linux or BSD, or even Mac OS X, you’ve probably seen a “load average” measurement somewhere.

Load vs Load Average

On Unix-like systems, including Linux, the system load is a measurement of the computational work the system is performing. This measurement is displayed as a number. A completely idle computer has a load average of 0. Each running process either using or waiting for CPU resources adds 1 to the load average. So, if your system has a load of 5, five processes are either using or waiting for the CPU.

Unix systems traditionally just counted processes waiting for the CPU, but Linux also counts processes waiting for other resources — for example, processes waiting to read from or write to the disk.

On its own, the load number doesn’t mean too much. A computer might have a load of 0 one split-second, and a load of 5 the next split-second as several processes use the CPU. Even if you could see the load at any given time, that number would be basically meaningless.

That’s why Unix-like systems don’t display the current load. They display the load average — an average of the computer’s load over several periods of time. This allows you to see how much work your computer has been performing.

Finding the Load Average

The load average is shown in many different graphical and terminal utilities, including in the top command and in the graphical GNOME System Monitor tool. However, the easiest, most standardized way to see your load average is to run the uptime command in a terminal. This command shows your computer’s load average as well as how long it’s been powered on.

The uptime command works on Linux, Mac OS X, and other Unix-like systems.

Understanding the Load Average

The first time you see a load average, the numbers look fairly meaningless. Here’s an example load average readout:

load average: 1.05, 0.70, 5.09

From left to right, these numbers show you the average load over the last one minute, the last five minutes, and the last fifteen minutes. In other words, the above output means:

load average over the last 1 minute: 1.05

load average over the last 5 minutes: 0.70

load average over the last 15 minutes: 5.09

The time periods are omitted to save space. Once you’re familiar with the time periods, you can quickly glance at the load average numbers and understand what they mean.

What do the numbers mean exactly?

Let’s use the above numbers to understand what the load average actually means. Assuming you’re using a single-CPU system, the numbers tell us that:

over the last 1 minute: The computer was overloaded by 5% on average. On average, .05 processes were waiting for the CPU. (1.05)

over the last 5 minutes: The CPU idled for 30% of the time. (0.70)

over the last 15 minutes: The computer was overloaded by 409% on average. On average, 4.09 processes were waiting for the CPU. (5.09)

You probably have a system with multiple CPUs or a multi-core CPU. The load average numbers work a bit differently on such a system. For example, if you have a load average of 2 on a single-CPU system, this means your system was overloaded by 100 percent — the entire period of time, one process was using the CPU while one other process was waiting. On a system with two CPUs, this would be complete usage — two different processes were using two different CPUs the entire time. On a system with four CPUs, this would be half usage — two processes were using two CPUs, while two CPUs were sitting idle.

To understand the load average number, you need to know how many CPUs your system has. A load average of 6.03 would indicate a system with a single CPU was massively overloaded, but it would be fine on a computer with 8 CPUs.
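To put this in practice, here is a quick shell sketch (Linux-specific paths; on macOS you would use `sysctl -n hw.ncpu` and `sysctl -n vm.loadavg` instead) that reads the 1-minute load average and the CPU count so you can compare them:

```shell
# Compare the 1-minute load average against the number of CPUs (Linux only)
cpus=$(nproc)
load1=$(cut -d ' ' -f 1 /proc/loadavg)
echo "1-min load: $load1 across $cpus CPU(s)"
```

A load value well above the CPU count sustained over the 5- and 15-minute windows is the signal worth investigating.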

The load average is especially useful on servers and embedded systems. You can glance at it to understand how your system is performing. If it’s overloaded, you may need to deal with a process that’s wasting resources, provide more hardware resources, or move some of the workload to another system.

How To Add Swap Files on Ubuntu 14.04

One of the easiest way of increasing the responsiveness of your server and guarding against out of memory errors in your applications is to add some swap space. Swap is an area on a hard drive that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM.

Create a Swap File

One quick way of getting the same file is by using the fallocate program. This command creates a file of a preallocated size instantly, without actually having to write dummy contents.

sudo fallocate -l 4G /swapfile

The prompt will be returned to you almost immediately. We can verify that the correct amount of space was reserved by typing:

ls -lh /swapfile

-rw-r--r-- 1 root root 4.0G Apr 28 17:19 /swapfile

As you can see, our file is created with the correct amount of space set aside.

Enabling the Swap File

Right now, our file is created, but our system does not know that this is supposed to be used for swap. We need to tell our system to format this file as swap and then enable it.

Before we do that though, we need to adjust the permissions on our file so that it isn’t readable by anyone besides root. Allowing other users to read or write to this file would be a huge security risk. We can lock down the permissions by typing:

sudo chmod 600 /swapfile

Verify that the file has the correct permissions by typing:

ls -lh /swapfile

-rw------- 1 root root 4.0G Apr 28 17:19 /swapfile

As you can see, only the columns for the root user have the read and write flags enabled.

Now that our file is more secure, we can tell our system to set up the swap space by typing:

sudo mkswap /swapfile

Setting up swapspace version 1, size = 4194300 KiB

no label, UUID=e2f1e9cf-c0a9-4ed4-b8ab-714b8a7d6944

Our file is now ready to be used as a swap space. We can enable this by typing:

sudo swapon /swapfile

We can verify that the procedure was successful by checking whether our system reports swap space now:

sudo swapon -s

Filename Type Size Used Priority

/swapfile file 4194300 0 -1

We have a new swap file here. We can use the free utility again to corroborate our findings:

free -m

total used free shared buffers cached

Mem: 3953 101 3851 0 5 30

-/+ buffers/cache: 66 3887

Swap: 4095 0 4095

Our swap has been set up successfully and our operating system will begin to use it as necessary.

Make the Swap File Permanent

We have our swap file enabled, but when we reboot, the server will not automatically enable the file. We can change that though by modifying the /etc/fstab file.

Edit the file with root privileges in your text editor. At the bottom of the file, you need to add a line that will tell the operating system to automatically use the file you created:
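For a swap file created at /swapfile as above, the entry to append typically looks like the following (worth verifying against your distribution’s documentation):

```
/swapfile   none    swap    sw    0    0
```

After saving the file, the swap will be enabled automatically on every boot.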

Note: This post is modified from Shawn Tylor‘s answer on StackOverflow. Please refer to the original link or the Bootstrap documentation for more details.

Here’s an attempt at a simple explanation for Bootstrap Grid system.

Ignoring the letters (xs, sm, md, lg) for now, I’ll start with just the numbers…

The numbers (1-12) represent a portion of the total width of any div: all divs are divided into 12 columns. So, col-*-6 spans 6 of 12 columns (half the width), col-*-12 spans 12 of 12 columns (the entire width), etc. If you want two equal columns to span a div, write

<div class="col-xs-6">Column 1</div>
<div class="col-xs-6">Column 2</div>

Or if you want three unequal columns to span that same width, you could write:

<div class="col-xs-2">Column 1</div>
<div class="col-xs-6">Column 2</div>
<div class="col-xs-4">Column 3</div>

You’ll notice the # of columns always add up to 12. It can be less than 12, but beware if more than 12, as your offending divs will bump down to the next row (not .row, which is another story altogether).

You can also nest columns within columns, (best with a .row wrapper around them) such as:

<div class="col-xs-6">
  <div class="row">
    <div class="col-xs-4">Column 1-a</div>
    <div class="col-xs-8">Column 1-b</div>
  </div>
</div>
<div class="col-xs-6">
  <div class="row">
    <div class="col-xs-2">Column 2-a</div>
    <div class="col-xs-10">Column 2-b</div>
  </div>
</div>

Each set of nested divs also span up to 12 columns of their parent div. Since each .col class has 15px padding on either side, you should usually wrap nested columns in a .row, which has -15px margins. This avoids duplicating the padding, and keeps the content lined up between nested and non-nested col classes.

– You didn’t specifically ask about the xs, sm, md, lg usage, but they go hand-in-hand so I can’t help but touch on it…

In short, they are used to define the screen size at which that class starts to apply.

You should usually classify a div using multiple column classes so it behaves differently depending on the screen size (this is the heart of what makes Bootstrap responsive). e.g.: a div with classes col-xs-6 and col-sm-4 will span half the screen on a mobile phone (xs) and 1/3 of the screen on tablets (sm).

<!-- 1/2 width on mobile, 1/3 width on tablet -->
<div class="col-xs-6 col-sm-4">Column 1</div>

<!-- 1/2 width on mobile, 2/3 width on tablet -->
<div class="col-xs-6 col-sm-8">Column 2</div>

NOTE: Grid classes for a given screen size apply to that screen size and larger unless another declaration overrides it (i.e. col-xs-6 col-md-4 spans 6 columns on xs and sm, and 4 columns on md and lg, even though sm and lg were never explicitly declared).

If you don’t define xs, it will default to col-xs-12 (i.e. col-sm-6 is half the width on sm, md and lg screens, but full-width on xs screens).

It’s actually totally fine if your .row includes more than 12 cols, as long as you are aware of how they will react. –This is a contentious issue, and not everyone agrees.

Kendo Kata are fixed patterns that teach kendoka (kendo practitioners) the basic elements of swordsmanship. There are two roles, uchidachi (打太刀), the teacher, and shidachi (仕太刀), the student. Kata were originally used to preserve the techniques and history of kenjutsu for future generations. Modern usage of kata is as a teaching tool to learn strike techniques, attack intervals, body movement, sincerity and kigurai (pride).

There are two types of Kendo kata. The first, Nihon Kendo Kata, was finalized in 1912; its first seven kata use a tachi (a long bokken) for both student and teacher, and the last three use a tachi for the teacher and a kodachi (a shorter bokken) for the student.

Nihon Kendo Kata receives criticism for continued usage of outdated forms. For example, kodachi are no longer used except when wielding two swords. This led to the development of the second type, Bokuto Ni Yoru Kendo Kihon-waza Keiko-ho.

Bokuto Ni Yoru Kendo Kihon-waza Keiko-ho is a new form of bokken training that is directly translatable to bogu Kendo. The first four waza focus on attack initiation techniques, while the last five focus on techniques for responding to an attack. Here is the technique table for Bokuto Ni Yoru Kendo Kihon-waza Keiko-ho.

Name and Technique (Kihon): Strikes Used
Ippon-uchi no waza: Men, Kote, Dō, Tsuki
Ni/Sandan no waza: Kote, Men
Harai waza: Harai Men
Hiki waza: Tsubazeriai kara no Hiki Dō
Nuki waza: Men, Nuki Dō
Suriage waza: Kote, Suriage Men
Debana waza: Debana Kote
Kaeshi waza: Men, Kaeshi Migi-Dō
Uchiotoshi waza: Dō Uchiotoshi Men

Ippon-uchi no waza

Action: Single cuts: Men, Kote, Dō, Tsuki

Ni/Sandan no waza

Action: Two continuous cuts: Kote and Men

Harai waza

Action: Harai Men (using omote, the left side of your sword)

Hiki waza

Action: Hiki Dō (the right dō)

Nuki waza

Action: Men Nuki Dō (the right dō)

Suriage waza

Action: Kote Suriage Men (using ura, the right side of your sword)

Debana waza

Action: Debana Kote

Kaeshi waza

Action: Men Kaeshi Dō (the right dō)

Uchiotoshi waza

Action: Dō Uchiotoshi Men (the right dō)

Resource

Disclaimer: All content provided on this Hackjutsu Dojo blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site.

Hacking Your Customer Interviewhttp://hackjutsu.com/2016/03/22/Hacking your customer interview/2016-03-23T01:00:00.000Z2016-03-22T23:25:12.000Z

Too many startups begin with an idea for a product that they think people want. They then spend months, sometimes years, perfecting that product without ever showing the product, even in a very rudimentary form, to the prospective customer. When they fail to reach broad uptake from customers, it is often because they never spoke to prospective customers and determined whether or not the product was interesting. When customers ultimately communicate, through their indifference, that they don’t care about the idea, the startup fails. – By Eric Ries

Following the Lean Startup principles, I have been doing some research on how to interview our customers for one of my hack week projects. Here are my notes for the research. Most of the content comes from Customer Development Labs, which was founded by Justin Wilcox.

Ground Rules for Interviewing

Before we discuss how to interview, let’s talk about what we should not do during an interview.

No Pitching

This is about listening. If you find yourself proposing an idea and fishing for feedback on it, STOP. This is pitching. It shifts your mindset from learning and absorbing information to selling a product, but the goal here is entirely about learning from and listening to your customers.

No Questions About the Future

Do not ask hypothetical questions about the future, like “Would you…” or “Will you…”. Instead, ask questions like “Have you ever…” or “Tell me about the last time…”. If we ask our customers about the future, we get their predictions, which are basically useless: most customers don’t actually know, and if they happen to say something we want to hear, we will be misled. Another reason is that questions like “Would you…” or “Will you…” are really just pitching in disguise.

What to ask?

Customer Interview Script

Tell me a story about the last time <problem context>…

What was hardest?

Why was that hard?

How do you solve it now?

Why is that not awesome?

The trickiest part above is how to define the <problem context> for question #1. Let’s look at an example from Justin Wilcox’s blog. Assuming we want to build a vegetarian Yelp, we don’t want the problem context to be so specific that it gives away the solution we already have in mind, like:

Don’t ask: “What’s the hardest part about finding a good vegetarian restaurant in a new city?”

And we don’t want to be so broad that we are inviting discussion about a range of problems we have no interest to solve:

Don’t ask: “What’s the hardest part about eating out as a vegetarian?”

We need to ask about a significant problem context:

Ask: “What’s the hardest part about eating out as a vegetarian in a new city?”

The answers will help us validate our hypothesis if customers do have the problem we expected, and point us toward a problem they actually have if they don’t.

Bonus Points

Fulfilling the script above is just the basics for the customer interview. Bonus will be given if we can achieve some of the points below.

Emotions: Observe your customers’ emotions when they are talking about the problem, and try to understand them.

Resource

A Cup of Git Lattehttp://hackjutsu.com/2016/03/12/A Cup of Git Latte/2016-03-13T02:00:01.000Z2017-02-10T18:57:15.000Z

More about fetch, merge, pull

More about fetch

git fetch fetches branches and/or tags (collectively, refs) from one or more other repositories, along with the objects necessary to complete their histories. Remote-tracking branches are updated.

When no remote is specified, by default the origin remote will be used, unless there’s an upstream branch configured for the current branch.

The names of refs that are fetched, together with the object names they point at, are written to .git/FETCH_HEAD. This information may be used by scripts or other git commands, such as git-pull. FETCH_HEAD is just a reference to the tip of the last fetch, whether that fetch was initiated directly using the fetch command or as part of a pull.
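To see this in action, here is a small sketch using two throwaway local repositories (all paths are temporary, created just for the demonstration):

```shell
# Watch FETCH_HEAD get written by a fetch
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git clone -q "$tmp/upstream" "$tmp/clone"
git -C "$tmp/clone" fetch -q origin
# Each line: <object name> <tab> <tab> branch '<name>' of <url>
cat "$tmp/clone/.git/FETCH_HEAD"
```

The file lists each fetched ref together with the commit it pointed at, which is exactly what `git merge FETCH_HEAD` consumes during a pull.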

remote.<name>.fetch

When we have remote.<name>.fetch set as:

[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*

This configuration is used in two ways:

Without specifying branches

git fetch origin

The above command copies all branches from the remote refs/heads/ namespace and stores them in the local refs/remotes/origin/ namespace, unless the remote.<name>.fetch option is used to specify a non-default refspec.

Specifying branches

git fetch origin master

This command will fetch only the master branch. The remote.<repository>.fetch values determine which remote-tracking branch, if any, is updated.

More about merge

git merge —— Join two or more development histories together

git merge incorporates changes from the named commits (since the time their histories diverged from the current branch) into the current branch.

Assume the following history exists and the current branch is “master”:

      A---B---C topic
     /
D---E---F---G master

Then git merge topic will replay the changes made on the topic branch since it diverged from master (E) until its current commit (C) on top of master, and record the result in a new commit (H) along with the names of the two parent commits and a log message from the user describing the changes.

      A---B---C topic
     /         \
D---E---F---G---H master

Pre-merge checks

Before performing any merge, we should make sure our code is in good shape and commit all local changes. git pull and git merge will stop without doing anything when local uncommitted changes overlap with files that git pull/git merge may need to update.

To avoid recording unrelated changes in the merge commit, git pull and git merge will also abort if there are any changes registered in the index relative to the HEAD commit. (One exception is when the changed index entries are in the state that would result from the merge already.)

If all named commits are already ancestors of HEAD, git merge will exit early with the message “Already up-to-date.”

Fast-forward merge

Often the current branch head is an ancestor of the named commit. This is the most common case especially when invoked from git pull: we are tracking an upstream repository, we have committed no local changes, and now we want to update to a newer upstream revision. In this case, a new commit is not needed to store the combined history; instead, the HEAD (along with the index) is updated to point at the named commit, without creating an extra merge commit.

This behavior can be suppressed with the --no-ff option.

True merge

Except in a fast-forward merge (see above), the branches to be merged must be tied together by a merge commit that has both of them as its parents.

A merged version reconciling the changes from all branches to be merged is committed, and our HEAD, index, and working tree are updated to it. It is possible to have modifications in the working tree as long as they do not overlap; the update will preserve them.

When it is not obvious how to reconcile the changes, the following happens:

The HEAD pointer stays the same.

The MERGE_HEAD ref is set to point to the other branch head.

Paths that merged cleanly are updated both in the index file and in our working tree.

For conflicting paths, the index file records up to three versions: stage 1 stores the version from the common ancestor, stage 2 from HEAD, and stage 3 from MERGE_HEAD (we can inspect the stages with git ls-files -u). The working tree files contain the result of the “merge” program; i.e. 3-way merge results with familiar conflict markers <<<===>>>.

No other changes are made. In particular, the local modifications we had before we started merge will stay the same and the index entries for them stay as they were, i.e. matching HEAD.

If we tried a merge which resulted in complex conflicts and want to start over, we can recover with git merge --abort.

Resolve conflicts

After seeing a conflict, we can do two things:

Decide not to merge. The only clean-ups we need are to reset the index file to the HEAD commit to reverse 2. and to clean up working tree changes made by 2. and 3.; git merge --abort can be used for this.

Resolve the conflicts. Git will mark the conflicts in the working tree. Edit the files into shape and git add them to the index. Use git commit to seal the deal.

More about pull

git pull —— Fetch from and integrate with another repository or a local branch

git pull incorporates changes from a remote repository into the current branch. In its default mode, git pull is shorthand for git fetch followed by git merge FETCH_HEAD.

More precisely, git pull runs git fetch with the given parameters and calls git merge to merge the retrieved branch heads into the current branch. See the More about config variables for more details.

Update the remote-tracking branches for the repository we cloned from, then merge one of them into our current branch:

git pull
git pull origin

Normally the branch merged in is the HEAD of the remote repository, but the choice is determined by the branch.<name>.remote and branch.<name>.merge options.

To merge a specific remote branch next into our current branch, we can run:

git pull origin next

This leaves a copy of next temporarily in FETCH_HEAD, but does not update any remote-tracking branches. Using remote-tracking branches, the same can be done by invoking fetch and merge:

git fetch origin

git merge origin/next

If we tried a pull which resulted in complex conflicts and would want to start over, we can recover with git reset.

More about config variables

branch.<name>.merge: Defines, together with branch.<name>.remote, the upstream branch for the given branch. It tells git fetch/git pull/git rebase which branch to merge and can also affect git push (see push.default). When in branch <name>, it tells git fetch the default refspec to be marked for merging in FETCH_HEAD.

branch.<name>.pushRemote: When on branch <name>, it overrides branch.<name>.remote for pushing. It also overrides remote.pushDefault for pushing from branch <name>. When we pull from one place (e.g. our upstream) and push to another place (e.g. our own publishing repository), we would want to set remote.pushDefault to specify the remote to push to for all branches, and use this option to override it for a specific branch.

remote.pushDefault: The remote to push to by default. Overrides branch.<name>.remote for all branches, and is overridden by branch.<name>.pushRemote for specific branches.

Visualize the workflow: By creating a visual model of your work and workflow, you can observe the flow of work moving through your Kanban system. Making the work visible—along with blockers, bottlenecks and queues—instantly leads to increased communication and collaboration.

Limit Work in Process: By limiting how much unfinished work is in process, you can reduce the time it takes an item to travel through the Kanban system. You can also avoid problems caused by task switching and reduce the need to constantly reprioritize items.

Focus on Flow: By using work-in-process (WIP) limits and developing team-driven policies, you can optimize your Kanban system to improve the smooth flow of work, collect metrics to analyze flow, and even get leading indicators of future problems by analyzing the flow of work.

Continuous Improvement: Once your Kanban system is in place, it becomes the cornerstone for a culture of continuous improvement. Teams measure their effectiveness by tracking flow, quality, throughput, lead times and more. Experiments and analysis can change the system to improve the team’s effectiveness.

Java is a multithreaded programming language. A multithreaded program contains two or more parts that can run concurrently, each handling a different task at the same time and making optimal use of the available resources, especially when your computer has multiple CPUs.

In this post, we will discover how to write effective and efficient multithreaded programs in Java.

Thread Basics

Creating and Starting Threads

Java Threads have to be instances of java.lang.Thread or instances of subclasses of this class. Creating and starting a thread can be simply done like this:

Thread thread = new Thread();

thread.start(); // NOT the run() method!!

The code above doesn’t give the thread anything specific to run, so the thread stops right after it starts. To specify some logic for the new thread, we either subclass Thread or pass an implementation of java.lang.Runnable to the Thread‘s constructor.

Subclassing Thread

Subclass the Thread and override its run() method.

public class CuteThread extends Thread {
    public void run() {
        System.out.println("CuteThread is running~");
    }
}

To start the thread:

CuteThread cuteThread = new CuteThread();

cuteThread.start();

Implementing Runnable

Thread thread = new Thread(new Runnable() {
    public void run() {
        System.out.println("MyRunnable running");
    }
});
thread.start(); // start() returns void, so it can't be chained off the constructor call

Implementing Runnable is the preferred way to run specific codes on a new thread, since we are not specializing the thread’s interface.

Get the Current Thread

Thread.currentThread() returns a reference to the currently executing Thread instance.
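For instance (a tiny illustrative class, not from the original post):

```java
public class CurrentThreadDemo {
    // Thread.currentThread() returns the Thread object for whichever
    // thread happens to be executing this method.
    static String callerThreadName() {
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) {
        System.out.println("Running on: " + callerThreadName());
    }
}
```

Called from main(), this prints the name of the main thread; called from inside a run() method, it would print that worker thread's name instead.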

Name of a Thread

We can assign a name to a Java Thread by passing it to the constructor. We can retrieve the name by calling getName().

MyRunnable runnable = new MyRunnable();

Thread thread = new Thread(runnable, "New Thread");

thread.start();

System.out.println(thread.getName());

Pausing Execution with Sleep

Thread.sleep() causes the current thread to suspend execution for a specified period.

// pause for 4 seconds

Thread.sleep(4000);
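Since Thread.sleep() throws the checked InterruptedException, real code must handle it. A small illustrative helper (names are mine, not from the original post):

```java
public class SleepDemo {
    // Sleep for roughly `ms` milliseconds, restoring the interrupt flag
    // if the sleep is interrupted; returns the measured elapsed time.
    static long pause(long ms) {
        long start = System.nanoTime();
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            // Re-set the flag so callers can still observe the interruption
            Thread.currentThread().interrupt();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("Slept for ~" + pause(200) + " ms");
    }
}
```

Swallowing the exception silently is a common mistake; restoring the interrupt flag keeps the thread cooperative with whoever interrupted it.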

Joining a Thread

If join() is called on a Thread instance, the currently running thread will block until the Thread instance has finished executing.

// The current thread will be blocked until threadA finishes

threadA.join();

// The current thread waits at most 2000ms

threadB.join(2000);

// The current thread waits at most 2000ms + 100ns

threadC.join(2000, 100);

Yielding a Thread

According to the Java documentation, yield() is:

a hint to the scheduler that the current thread is willing to yield its current use of a processor.

Let’s use the following snippet as an example.

public class HelloWorld {
    public static void main(String[] args) throws InterruptedException {
        Thread myThread = new Thread() {
            public void run() {
                System.out.println("Hello from new thread");
            }
        };
        myThread.start();
        Thread.yield();
        System.out.println("Hello from main thread");
        myThread.join();
    }
}

Without the call to yield(), the startup overhead of the new thread would mean that the main thread would almost certainly get to its println() first, although it is not guaranteed to be the case.

Java Volatile Keyword

Let’s look at an example. In the following codes, we start thread_B from thread_A. Then we send a stop signal from thread_A to thread_B to stop the latter thread.

import java.util.Scanner;

class Processor extends Thread {
    private boolean running = true; // Pitfall!!

    public void run() {
        while (running) {
            System.out.println("Hello");
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void shutdown() {
        running = false;
    }
}

public class BasicSync {
    public static void main(String[] args) {
        // Start thread_B
        Processor proc = new Processor();
        proc.start();

        System.out.println("Please enter return key to stop...");
        Scanner scanner = new Scanner(System.in);
        scanner.nextLine();

        // Send shutdown signal from thread_A to thread_B
        proc.shutdown();
    }
}

The program looks fine at first glance, but it could actually fail to stop thread_B, depending on how the compiler optimizes the program. With some compilers, while(running){...} in Processor could be optimized into while(true){...}: the compiler has no idea that running can be changed by another thread, so as far as it knows the value is always true.

To avoid this, we need to declare running as volatile, which tells the compiler that the variable can be changed by code on other threads and must not be optimized away.

private volatile boolean running = true;

Thread pools with the Executor Framework

Runnable

The Executor framework is used to run Runnable objects without creating a new thread every time, mostly re-using already created threads. A thread pool manages a pool of worker threads; each task submitted to the thread pool enters a queue, waiting to be executed.

A thread pool can be described as a collection of Runnable objects (work queue) and a collection of running threads. These threads are constantly running and are checking the work queue for new work. If there is new work to be done they execute this Runnable. The thread pool itself provides a method, e.g. execute(Runnable r), to add a new Runnable object to the work queue. – By vogella Java Tutorial

A thread pool is represented by an instance of the class ExecutorService. With the ExecutorService instance, we can submit tasks to be executed in the future.
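The original snippet for this step appears to have been lost in formatting. As a sketch of the idea (class and method names here are illustrative, not from the original post), submitting 10 Runnables to a fixed pool of 4 threads might look like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Submit `tasks` Runnables to a pool of `poolSize` threads and
    // return how many of them actually ran.
    static int runTasks(int poolSize, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    done.incrementAndGet();
                }
            });
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for the queue to drain
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(4, 10) + " tasks completed");
    }
}
```

The 4 worker threads pull tasks off the internal queue as they become free, so no task waits for a dedicated thread of its own.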

In the example above, 10 Runnable instances will be submitted to a thread pool of size 4. We are responsible for shutting down the thread pool in order to terminate all its threads, or the JVM may never shut down.

// This will make the executor accept no new tasks
// and finish all existing tasks in the queue
pool.shutdown();
// Wait until all tasks are finished (a timeout is required)
pool.awaitTermination(60, TimeUnit.SECONDS);

We can also force the shutdown of the pool using shutdownNow(), with that the currently running tasks will be interrupted and the tasks not started will be returned.

Futures and Callables

The Executor framework works with a Runnable instance as shown above. However, Runnable cannot return a result to the caller. To get the computed result, Java provides the Callable interface.

The Callable object uses generics to define the return value.

import java.util.concurrent.Callable;

public class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (long i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}

When we submit a Callable instance to the thread pool, we will get a Future object, which exposes methods for us to monitor the progress that the task being executed.

ExecutorService executor = Executors.newFixedThreadPool(5);

Future<Integer> future = executor.submit(new MyCallable());

int result = future.get();

The Future‘s get() waits, if necessary, for the computation to complete, and then retrieves the result. Here is a list of methods provided by Future:

boolean cancel(boolean mayInterruptIfRunning)

Attempts to cancel execution of this task.

V get()

Waits if necessary for the computation to complete, and then retrieves its result.

V get(long timeout, TimeUnit unit)

Waits if necessary for at most the given time for the computation to complete, and then retrieves its result, if available.

boolean isCancelled()

Returns true if this task was cancelled before it completed normally.

boolean isDone()

Returns true if this task completed.

Note: Check out the Oracle documentations for more about Callable and Future.

Java 8’s CompletableFuture

CompletableFuture extends the functionality of the Future interface with the possibility to notify the caller once a task is done by utilizing function-style callbacks.
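A minimal sketch of that callback style (names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableDemo {
    // Run a computation asynchronously and attach a transformation
    // callback instead of blocking on get() between the steps.
    static int doubledAsync(int x) {
        CompletableFuture<Integer> f = CompletableFuture
                .supplyAsync(() -> x)     // runs on the common ForkJoinPool
                .thenApply(v -> v * 2);   // callback applied when the value is ready
        return f.join();                  // block only here, at the end of the pipeline
    }

    public static void main(String[] args) {
        System.out.println(doubledAsync(21)); // prints 42
    }
}
```

Unlike a plain Future, each stage can be chained without a dedicated waiting thread; the callbacks fire as results become available.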

Synchronized Keyword

The Java synchronized keyword serves as Java’s intrinsic lock. It marks a Java block or method as synchronized to avoid race conditions. Such synchronized blocks or methods allow only one thread to execute their code at a time. As summarized by Jakob Jenkov, the synchronized keyword can be used to mark four different types of blocks:

Instance methods

Static methods

Code blocks inside instance methods

Code blocks inside static methods

Synchronized Instance Methods

A synchronized instance method in Java is synchronized on the instance (object) owning the method. Only one thread can execute the synchronized method on the same instance at one time.

public synchronized void add(int value) {
    this.count += value;
}

Synchronized Static Methods

Only one thread can execute inside a static synchronized method in the same class.

public static synchronized void add(int value) {
    count += value;
}

Note that declaring a method as synchronized is just syntactic sugar for surrounding the method’s body with the following:

synchronized(this) {
    <<method body>>
}

Synchronized Blocks in Instance Methods

Sometimes we don’t need to synchronize the whole method; instead, we can synchronize only a block of code.

public void add(int value) {
    // Some code before the synchronized block
    synchronized(this) {
        this.count += value;
    }
    // Some code after the synchronized block
}

The object taken in the parentheses by the synchronized construct is called a monitor object. Only one thread can execute inside a Java code block synchronized on the same monitor object. In the code above, the synchronized block takes this as the monitor object.

Synchronized Blocks in Static Methods

Only one thread can execute inside the synchronized block in the same class (MyClass.class in the code below).

public class MyClass {

    public static void log(String msg1, String msg2) {
        // Some code before the synchronized block
        synchronized(MyClass.class) {
            System.out.println("Hello World!");
        }
        // Some code after the synchronized block
    }
}

Thread Signaling

As the name suggests, thread signaling enables threads to send signals to each other. It should also allow threads to wait for signals from other threads.

Busy Waiting

The most intuitive way to do thread signaling is to let threads send signals to, and retrieve signals from, a shared object.

public class SharedSignal {

    protected boolean mShouldContinue = false;

    public synchronized boolean shouldContinue() {
        return mShouldContinue;
    }

    public synchronized void setShouldContinue(boolean shouldContinue) {
        mShouldContinue = shouldContinue;
    }
}

Thread A could busy-wait for Thread B to signal the SharedSignal object.

protected SharedSignal signal = new SharedSignal();

// Some code here
while (!signal.shouldContinue()) {
    // busy waiting
}

wait(), notify() and notifyAll()

Busy waiting consumes CPU cycles while waiting, which is not very efficient. Java’s Object has a built-in mechanism for a smarter wait: the thread sleeps while waiting until some other thread sends a signal to wake it up.

Object defines three methods wait(), notify() and notifyAll() to facilitate this smart wait.

A thread that calls wait() on any object becomes inactive until another thread calls notify() on that object. In order to call either wait() or notify(), the calling thread must first obtain the lock on that object. In other words, the calling thread must call wait() or notify() from inside a synchronized block.

Once a thread calls wait() it releases the lock it holds on the monitor object. Once a thread is awakened it cannot exit the wait() call until the thread calling notify() has left its synchronized block. If multiple threads are awakened using notifyAll() only one awakened thread at a time can exit the wait() method, since each thread must obtain the lock on the monitor object in turn before exiting wait().

Always use while(!pizzaArrived) instead of if(!pizzaArrived) to guard against spurious wakeups.

We must hold the lock (synchronized) before invoking wait()/notify(). A thread must also re-acquire the lock before waking from wait().

Try to avoid acquiring any lock within your synchronized block and strive to not invoke alien methods (methods you don’t know for sure what they are doing). If you have to, make sure to take measures to avoid deadlocks.

Be careful with notify(). Stick with notifyAll() until you know what you are doing.

Note: Don’t call wait() on constant Strings or global objects!! The JVM/Compiler internally translates constant strings into the same object.
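The rules above can be sketched with a hypothetical PizzaShop class (the names pizzaArrived, deliverPizza, and the private monitor object are made up for illustration):

```java
public class PizzaShop {

    private final Object monitor = new Object(); // private lock, never a constant String
    private boolean pizzaArrived = false;

    public void waitForPizza() throws InterruptedException {
        synchronized (monitor) {
            while (!pizzaArrived) {   // while, not if: guards against spurious wakeups
                monitor.wait();       // releases the lock while sleeping
            }
        }
    }

    public void deliverPizza() {
        synchronized (monitor) {
            pizzaArrived = true;
            monitor.notifyAll();      // wake every waiting thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PizzaShop shop = new PizzaShop();
        Thread customer = new Thread(() -> {
            try {
                shop.waitForPizza();
                System.out.println("Pizza received!");
            } catch (InterruptedException ignored) { }
        });
        customer.start();
        shop.deliverPizza();
        customer.join();
    }
}
```

Because the flag is checked inside the synchronized block, the waiting thread cannot miss a signal sent before it starts waiting.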

Re-entrant Locks and Condition Variables

In Java 5.0, a new addition called ReentrantLock was made to enhance intrinsic locking capabilities. Prior to this, synchronized and volatile were the means for achieving concurrency.

Re-entrant Locks and synchronized

synchronized uses intrinsic locks, or monitors; this article gives an insightful comparison between the intrinsic locking mechanism and the re-entrant lock mechanism. In short……

The main difference between synchronized and ReentrantLock is the ability to try for the lock interruptibly, and with a timeout.

ReentrantLock is a concrete implementation of the Lock interface. It is a mutually exclusive lock, similar to the implicit locking provided by the synchronized keyword in Java, with extended features like fairness, which can be used to grant the lock to the longest-waiting thread. The lock is acquired by the lock() method and held by the thread until a call to the unlock() method. The fairness parameter is provided while creating an instance of ReentrantLock in the constructor. ReentrantLock provides the same visibility and ordering guarantees as implicit locking, which means unlock() happens-before another thread’s lock().

Note that since the lock is not automatically released when the method exits, you should wrap the lock() and the unlock() methods in a try/finally clause.
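A minimal sketch of the try/finally idiom (the counter class here is made up for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockedCounter {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void add(int value) {
        lock.lock();
        try {
            count += value;   // critical section
        } finally {
            lock.unlock();    // always released, even if the body throws
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockedCounter counter = new LockedCounter();
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.add(1); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.add(1); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.getCount()); // prints 2000
    }
}
```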

Condition Variables

The Condition interface factors out the java.lang.Object monitor methods wait()/notify()/notifyAll() into distinct objects to give the effect of having multiple wait-sets per object, by combining them with the use of arbitrary Lock implementations. Where Lock replaces synchronized methods and statements, Condition replaces Object monitor methods.

Note: The main difference between synchronized/wait/notify and Lock is that the Lock API isn’t block bound, and we can have many groups of wait/notify by using multiple Condition instances.
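As a sketch, here is a tiny bounded buffer with two Condition instances on one lock (the class name and capacity are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // one wait-set per concern
    private final Condition notEmpty = lock.newCondition();
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 2;

    public void put(int value) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();      // like wait(), but tied to this Condition
            }
            items.addLast(value);
            notEmpty.signalAll();     // like notifyAll()
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            int value = items.removeFirst();
            notFull.signalAll();
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

With plain wait/notify, producers and consumers would share one wait-set and wake each other unnecessarily; the two Condition objects keep the "not full" and "not empty" waiters separate.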

Semaphores

The java.util.concurrent.Semaphore class is a counting semaphore. That means that it has two main methods:

acquire()

release()

The counting semaphore is initialized with a given number of “permits”. For each call to acquire() a permit is taken by the calling thread. For each call to release() a permit is returned to the semaphore. Thus, at most N threads can pass the acquire() method without any release() calls, where N is the number of permits the semaphore was initialized with. The permits are just a simple counter.

Semaphore Usage

A semaphore typically has two uses:

To guard a critical section against entry by more than N threads at a time.

To send signals between two threads.

Guarding Critical Sections

If we use a semaphore to guard a critical section, the thread trying to enter the critical section will typically first try to acquire a permit, enter the critical section, and then release the permit again afterwards, like this:

Semaphore semaphore = new Semaphore(1);

semaphore.acquire();
// critical section
...
semaphore.release();

Sending Signals Between Threads

If we use a semaphore to send signals between threads, then we would typically have one thread call the acquire() method, and the other thread call the release() method.

If no permits are available, the acquire() call will block until a permit is released by another thread. Note that java.util.concurrent.Semaphore’s release() never blocks; it simply adds a permit, since the class does not enforce an upper bound on permits.

Fairness

No guarantees are made about fairness of the threads acquiring permits from the Semaphore. That is, there is no guarantee that the first thread to call acquire() is also the first thread to obtain a permit.

To enforce fairness, the Semaphore class has a constructor that takes a boolean telling if the semaphore should enforce fairness.
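A minimal sketch of the fairness constructor (the permit count of 1 here is arbitrary):

```java
import java.util.concurrent.Semaphore;

public class FairSemaphoreDemo {

    public static void main(String[] args) throws InterruptedException {
        // The second argument enables fairness: waiting threads
        // acquire permits in FIFO order.
        Semaphore semaphore = new Semaphore(1, true);

        semaphore.acquire();                              // take the single permit
        System.out.println(semaphore.availablePermits()); // prints 0
        semaphore.release();                              // give it back
        System.out.println(semaphore.availablePermits()); // prints 1
    }
}
```

Fairness costs some throughput, so it is usually reserved for cases where starvation is a real concern.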

Blocking Queue

BlockingQueue is a queue interface whose insert and retrieve operations are thread safe, which makes it a nice candidate for concurrent development. Here is an example of utilizing BlockingQueue for a Producer-Consumer pattern.

Methods     Throws Exception   Special Value   Blocks   Times Out
Insert      add(o)             offer(o)        put(o)   offer(o, timeout, timeunit)
Remove      remove(o)          poll()          take()   poll(timeout, timeunit)
Examine     element()          peek()          N/A      N/A

Throws Exception: If the attempted operation is not possible immediately, an exception is thrown.

Special Value: If the attempted operation is not possible immediately, a special value is returned (often true / false).

Blocks: If the attempted operation is not possible immediately, the method call blocks until it is.

Times Out: If the attempted operation is not possible immediately, the method call blocks until it is, but waits no longer than the given timeout. Returns a special value telling whether the operation succeeded or not (typically true / false).
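A minimal Producer-Consumer sketch using ArrayBlockingQueue (the queue capacity and item count here are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {

    // Producer puts 1..5 on the queue; the consumer takes and sums them.
    public static int runOnce() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // small capacity forces blocking

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException ignored) { }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += queue.take(); // blocks while the queue is empty
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // prints 15
    }
}
```

All the waiting and signaling lives inside put() and take(), so neither thread needs its own wait/notify code.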

ConcurrentHashMap

ConcurrentHashMap performs better than Hashtable or a synchronized Map because it only locks a portion of the Map.

When Java was young, Doug Lea wrote the seminal book Concurrent Programming in Java. Along with the book he developed several thread-safe collections, which later became part of the JDK in the java.util.concurrent package. The collections in that package are safe for multithreaded situations and they perform well. In fact, the ConcurrentHashMap implementation performs better than HashMap in nearly all situations. It also allows for simultaneous concurrent reads and writes, and it has methods supporting common composite operations that are otherwise not thread safe. If Java 5 is the deployment environment, start with ConcurrentHashMap.
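As a small sketch of one such composite operation (the word-count scenario is made up for illustration), merge() performs an atomic per-key read-modify-write on a ConcurrentHashMap, so no external locking is needed:

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCountDemo {

    // merge() atomically inserts 1 for a new key, or combines the
    // existing count with 1 using Integer::sum for an existing key.
    public static ConcurrentHashMap<String, Integer> count(String... words) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        for (String word : words) {
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("a", "b", "a").get("a")); // prints 2
    }
}
```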

Reference

]]>
<p><img src="http://i.imgur.com/LOBPL1M.jpg" style="max-height: 350px;"/></p>
<p>Java is a multi-threaded programming language. A multi-threaded program contains two or more parts that can run concurrently, with each part handling a different task at the same time, making optimal use of the available resources, especially when your computer has multiple CPUs.</p>
<p>In this post, we will discover how to write effective and efficient multi-threaded programs in Java.</p>
Autoboxing and Unboxinghttp://hackjutsu.com/2016/01/12/Autoboxing and Unboxing/2016-01-13T02:00:01.000Z2016-01-15T01:47:12.000ZAutoboxing and unboxing were introduced in Java 1.5 to automatically convert primitive types into their wrapper classes and vice versa. With this feature, we can use primitives (int, double, float…) and wrapper classes (Integer, Double, Float…) interchangeably in many places.

The following table lists the primitive types and their corresponding wrapper classes, which are used by the Java compiler for autoboxing and unboxing:

Primitive type    Wrapper class
boolean           Boolean
byte              Byte
char              Character
float             Float
int               Integer
long              Long
short             Short
double            Double

Autoboxing and Unboxing Examples

Autoboxing

Autoboxing is the automatic conversion that the Java compiler makes to change primitive types to their corresponding object wrapper classes. Here is an example for autoboxing.

List<Integer> li = new ArrayList<>();
for (int i = 1; i < 50; i += 2) {
    li.add(i);
}

According to the Java Docs, The Java compiler applies autoboxing when a primitive value is:

Passed as a parameter to a method that expects an object of the corresponding wrapper class.

Assigned to a variable of the corresponding wrapper class.

Unboxing

Unboxing is the opposite process of autoboxing.

Integer myWrapperInt = 13;
int myPrimitive = myWrapperInt;

The Java compiler applies unboxing when an object of a wrapper class is:

Passed as a parameter to a method that expects a value of the corresponding primitive type.

Assigned to a variable of the corresponding primitive type.

Caveats

Autoboxing and unboxing let developers write cleaner code, making it easier to read; however, there are some caveats we need to understand before using them in production code.

Unnecessary Object creation due to Autoboxing

As shown in the example from Javarevisited, autoboxing inside a loop can create many short-lived throwaway objects, which can slow down the system with frequent garbage collection.
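A classic sketch of this pitfall (the method names here are made up for illustration): a boxed Long accumulator forces an unbox and a re-box on every iteration, while a primitive long does not.

```java
public class BoxingLoopDemo {

    // Boxed accumulator: each `sum += i` unboxes sum, adds, and
    // re-boxes the result into a brand-new Long object.
    public static long slowSum(int n) {
        Long sum = 0L;                 // wrapper type
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Primitive accumulator: same arithmetic, zero boxing.
    public static long fastSum(int n) {
        long sum = 0L;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(slowSum(1000)); // prints 499500
        System.out.println(fastSum(1000)); // prints 499500
    }
}
```

Both methods return the same value; the difference is purely in how many temporary objects the boxed version hands to the garbage collector.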

Complicated method overloading

As discussed in Javarevisited, autoboxing/unboxing complicates method overloading: with overloads such as value(int) and value(Integer), the compiler may silently pick a different overload than intended, introducing subtle bugs.

For example, ArrayList’s remove() is overloaded as remove(int index) and remove(Object). An int argument always selects remove(int index) — no autoboxing occurs — so it is easy to confuse removing by index with removing the object itself, especially when the elements are Integers.
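A small sketch of the two overloads in action:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveOverloadDemo {

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(10, 20, 30));

        list.remove(1);                   // int argument: remove(int index), removes 20
        System.out.println(list);         // prints [10, 30]

        list.remove(Integer.valueOf(10)); // Integer argument: remove(Object), removes the value 10
        System.out.println(list);         // prints [30]
    }
}
```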

Tricky “==” operator

I would like to borrow the example again from Javarevisited. More details can be found on the original post.

public class AutoboxingTest {

    public static void main(String args[]) {
        // Example 1: == comparison pure primitive – no autoboxing
        int i1 = 1;
        int i2 = 1;
        System.out.println("i1==i2 : " + (i1 == i2)); // true

        // Example 2: equality operator mixing object and primitive
        Integer num1 = 1; // autoboxing
        int num2 = 1;
        System.out.println("num1 == num2 : " + (num1 == num2)); // true

        // Example 3: special case - arises due to autoboxing in Java
        Integer obj1 = 1; // autoboxing will call Integer.valueOf()
        Integer obj2 = 1; // same call to Integer.valueOf() will return same
                          // cached Object
        System.out.println("obj1 == obj2 : " + (obj1 == obj2)); // true

        // Example 4: equality operator - pure object comparison
        Integer one = new Integer(1); // no autoboxing
        Integer anotherOne = new Integer(1);
        System.out.println(
            "one == anotherOne : " + (one == anotherOne)); // false
    }
}

Here is the output.

i1==i2 : true
num1 == num2 : true
obj1 == obj2 : true
one == anotherOne : false

I would like to put the insightful explanation from the original post here.

In the first example, both arguments of the == operator are of primitive int type, so no autoboxing occurs, and since 1 == 1 it prints true.

In the second example, autoboxing occurs during the assignment to num1, converting the primitive 1 into Integer(1). When we compare num1 == num2, unboxing occurs and Integer(1) is converted back to 1 by calling the Integer.intValue() method; since 1 == 1, the result is true.

The third example is a corner case of autoboxing: both Integer objects are initialized automatically via autoboxing. Since the Integer.valueOf() method is used to convert int to Integer, and it caches objects in the range -128 to 127, it returns the same object both times. In short, obj1 and obj2 point to the same object, so comparing them with the == operator returns true without any unboxing.

In the last example, the objects are explicitly initialized and compared using the equality operator; this time == returns false because the one and anotherOne reference variables point to different objects.

Resource

]]>
<p><img src="http://i.imgur.com/tuNnNbs.png" style="max-height: 280px;"/><br>Autoboxing and unboxing were introduced in Java 1.5 to automatically convert primitive types into their wrapper classes and vice versa. With this feature, we can use primitives(<code>int</code>, <code>double</code>, <code>float</code>…) and wrapper classes(<code>Integer</code>, <code>Double</code>, <code>Float</code>…) in many places interchangeably.<br>