Nov 3, 2014

In my last article I introduced a new version of picamera and its new feature. This feature lets us add overlays to the preview. You no longer have to continuously capture images from the camera and stamp an image or text onto each frame before sending it to the display device as a preview; just render the preview with start_preview() and add as many layers as you wish to show statistics, recording status or anything else you want.

Until picamera ver. 1.8 came out, that continuous-capture loop was the only way to modify the preview. Because that implementation shows the preview by continuously updating an image, the frame rate drops as the image modification process gets slower: the slower the loop, the fewer images captured, and the poorer the preview quality.
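To make the overlay idea concrete, here is a minimal sketch. The padding helpers are pure Python; the picamera calls at the bottom are only illustrative comments, since they need an actual Pi with a camera. One real constraint from the picamera documentation: overlay buffers must be padded so the width is a multiple of 32 and the height a multiple of 16.

```python
def padded_size(width, height):
    """picamera overlay buffers must be padded: width to a multiple of
    32 and height to a multiple of 16."""
    return ((width + 31) // 32) * 32, ((height + 15) // 16) * 16

def make_rgb_buffer(width, height, rgb=(255, 0, 0)):
    """Build a padded RGB byte buffer filled with a single color."""
    pw, ph = padded_size(width, height)
    return bytes(rgb) * (pw * ph)

# On a Pi with picamera 1.8+ installed, the buffer could then be shown
# on top of the preview like this:
#
#   import picamera
#   with picamera.PiCamera() as camera:
#       camera.start_preview()
#       overlay = camera.add_overlay(make_rgb_buffer(100, 30),
#                                    size=(100, 30), layer=3, alpha=128)
```

A 100x30 overlay therefore actually allocates a 128x32 buffer; picamera scales the visible region for you.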

Comparison

Despite the latency rpi-fbcp introduces, the overlay solution works better for me. I also like that it lets us separate the preview and overlay implementations.
Now I'll take a break and wait till PiTFT HAT comes out.

Sep 6, 2014

To interact with the Raspberry Pi Camera Board from Python, I chose a library called picamera. It provides an interface to control the Raspberry Pi camera without using the raspistill, raspivid or raspiyuv utilities. It actually provides even more features, such as taking images and recording video at the same time, modifying options while recording, etc. I think this library is reliable because it is well maintained by its owner, waveform80, and is used in Adafruit's sample project. While reading its code, I found it was very important to understand the concepts and basics of the Multi-Media Abstraction Layer (MMAL), so I began reading MMAL's specification. I am going to introduce what I have learned about MMAL and how picamera works with it.

MMAL

First of all, the Raspberry Pi has a BCM2835 SoC with a VideoCore IV GPU inside. As I already mentioned, MMAL stands for Multi-Media Abstraction Layer, and it provides a lower-level interface to the multi-media components running on this VideoCore. MMAL actually runs on top of OpenMAX, so there is another layer below it, but I decided to ignore that. As long as MMAL works fine as an abstraction layer and fulfills my requirements, I do not have to be aware of anything behind it.

Here is how MMAL works.

The client creates an MMAL component via the API.

When creation is done, a context for the component is returned to the client.

This context provides at least one input port and one output port; the number of ports and their formats vary depending on what the component represents. The format of an input port must be set when the component is created, while the format of an output port may be decided later and can be modified by the client.

The client and the component send buffer headers to each other through the input/output ports to exchange data. A component can also exchange buffer headers with other connected components.

There are two kinds of buffer headers:

One contains metadata and a pointer to the actual data. The payload, the actual data to be exchanged, is not stored in the buffer header itself, so its memory can be allocated outside of MMAL.

The other one contains event data.

These buffer headers are pooled and each of them gets recycled when it is no longer referenced.
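The pooling and recycling described above can be sketched as a toy model (this is plain Python for illustration, not real MMAL code): headers live in a pool, carry a payload pointer rather than the payload itself, and return to the pool once their reference count hits zero.

```python
class BufferHeader:
    """Toy stand-in for an MMAL buffer header: metadata plus a payload
    reference, recycled into its pool when no longer referenced."""

    def __init__(self, pool):
        self._pool = pool
        self._refcount = 0
        self.payload = None  # the actual data lives outside the header

    def acquire(self):
        self._refcount += 1
        return self

    def release(self):
        self._refcount -= 1
        if self._refcount == 0:
            self.payload = None
            self._pool._free.append(self)  # recycled back into the pool


class Pool:
    """Fixed-size pool of buffer headers."""

    def __init__(self, size):
        self._free = [BufferHeader(self) for _ in range(size)]

    def get(self):
        return self._free.pop().acquire()

    def available(self):
        return len(self._free)
```

Real MMAL does this in C with callbacks between client and component, but the lifecycle — pool, hand out, release, recycle — is the same shape.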

How picamera works with MMAL

The good thing about MMAL is that MMAL components can be connected to each other so they can exchange buffer headers. picamera, which is well designed, creates and combines several kinds of components to work with the Raspberry Pi Camera Board effectively. My favorite part is that it creates a splitter component that receives the video output from the camera component and duplicates it to four output ports. This way we can capture an image via the video port while video recording is in progress.

Components created on initialization

When picamera is initialized, it creates the components below:

Camera component

This is the most important one. Everything starts from here. This component provides 3 output ports:

preview

video

still

The other components I am going to introduce receive data directly or indirectly from these output ports.

Preview component

This component's input port is connected to the camera component's preview output. On initialization, and whenever an actual preview is not needed, a null-sink component is connected instead. This is necessary because if no component is connected to the camera component's preview output, the camera doesn't measure exposure and captured images gradually fade to black. When start_preview() is called, picamera creates a preview renderer component and replaces the null-sink with it to actually show the preview.

Splitter component

Its input port is connected to the camera component's video output. It has four output ports, and each works as a copy of the camera component's video output. This way we can record data from one output port and capture an image via another at the same time. Capturing an image via the video port is faster, so you might want to use it (camera.capture('foo.rgb', use_video_port=True)).

Other components

Some other kinds of components, including encoders and renderers, are created when necessary. Thanks to waveform80, a new version of picamera, ver. 1.8, was released today, and it provides an interface to create and manage MMAL renderer components. This is really handy when you want to add an overlay to your preview. I, for one, wanted this feature so badly that I asked @waveform80 about the release schedule last night.

@waveform80 Are you planning on releasing picamera ver. 1.8 in near future? I really like how add_overlay and renderer work in latest code.
— Oklahomer's (@oklahomers) September 5, 2014

He was nice enough to respond with a sense of humor, and he really released it!

Aug 31, 2014

Previously I finished setting up the Raspberry Pi, attaching the required peripheral devices, testing those devices, installing the required software packages and making a backup. I think it's a good time to start writing some code for this project. When the coding is done, the prototype looks like this.


A live preview with the current recording status and driving speed is displayed on the PiTFT. The good thing is that it can record and preview at the same time. I didn't really have to display the current speed, but I added it anyway because my wife always cares about how fast I'm driving. Maybe I could subtract 10 or 20 from the displayed speed to make her comfortable.
I'm going to explain what modules I created and how they work together.

Modules

For this project, I decided to go with modules below:

run.py

Odyssey.py

GPSController.py

PiCamController.py

run.py

This module is basically responsible for 3 things.

It sets the environment variables used by gpsd and the PiTFT.

It maps each tactile button to its corresponding function.

It initializes Odyssey and starts the camcorder application.

This way, other modules don't have to deal with GPIO and can concentrate on their own processing and logic. In fact, run.py is the only module that imports RPi.GPIO.
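The button-to-function mapping could look something like the sketch below. The pin numbers and callback names are made up for illustration; the point is that RPi.GPIO stays in one module and everything else only sees plain method calls.

```python
# Hypothetical pin-to-action table; real pins depend on your wiring.
BUTTON_MAP = {
    17: "toggle_recording",
    22: "toggle_preview",
    23: "toggle_gps_logging",
}

def dispatch(pin, app):
    """Look up the action for a pressed pin and invoke it on the app.

    Returns True if the pin was recognized, False otherwise."""
    action = BUTTON_MAP.get(pin)
    if action is None:
        return False
    getattr(app, action)()
    return True

# On a real Pi this would be registered with RPi.GPIO, e.g.:
#   GPIO.add_event_detect(pin, GPIO.FALLING,
#                         callback=lambda p: dispatch(p, odyssey))
```

Odyssey then only needs to expose toggle_* methods; it never imports RPi.GPIO itself.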

Odyssey.py

This is the key part of the project. On initialization, it initializes GPSController and PiCamController and stores those instances so it can manage them on user interaction. It provides an interface to toggle the preview, recording and GPS logging; run.py uses this interface.

GPSController.py

Obviously this handles the GPS device. On initialization it starts gpsd so we can receive data from the GPS Breakout. It provides access to GPS data, and while recording is active it logs the current location and speed to a designated file every five seconds.

PiCamController.py

While GPSController deals with the GPS device, this deals with the Raspberry Pi camera and the PiTFT. I started with separate modules for the camera and the PiTFT: PreviewController.py and PiCamController.py. However, the camera and the preview work closely together (e.g. they share the same python-picamera instance), so I combined them into one module.

How they work together

GPSController and PiCamController both inherit from threading.Thread, so each runs in its own thread. This way the two instances don't block each other, and Odyssey still has control over them.
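A minimal sketch of that controller pattern, with illustrative names (the real GPSController logs every five seconds; here the interval is a parameter): each controller subclasses threading.Thread and polls until the owner asks it to stop.

```python
import threading

class Controller(threading.Thread):
    """Sketch of a periodically-working controller thread."""

    def __init__(self, interval=5.0):
        super().__init__()
        self._stop_event = threading.Event()
        self._interval = interval
        self.ticks = 0

    def run(self):
        while True:
            self.work()
            # Event.wait doubles as an interruptible sleep, so stop()
            # takes effect immediately instead of after a full interval.
            if self._stop_event.wait(self._interval):
                break

    def work(self):
        self.ticks += 1  # e.g. log GPS data or update overlay text

    def stop(self):
        self._stop_event.set()
```

The owner (Odyssey, in this project's terms) would call start() on each controller and stop()/join() on shutdown, so neither controller ever blocks the other.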

I'm going to explain how each module works in later articles. Making live preview and recording work at the same time and overlaying the current status on the preview were somewhat tricky, so I'll spend some time explaining them, too.

Jul 21, 2014

With the previous steps, we installed the required hardware/software and finished the basic configuration, so we can now take photos, shoot video, output data to the external touchscreen and fetch location data. Before writing Python code to make these modules work together, I think we should take some time to make a copy of the disk we've been working on. This way, even if you make a huge mistake and everything gets messed up, you can restore this saved disk image and start from where we are now. It's just like playing your favorite RPG: you don't wanna start from scratch, right?

Making a copy

Just like when we installed Raspbian to the empty SD card, we use the dd command to make a disk image. Insert your SD card into your iMac and hit `diskutil list` to see which disk to use. Then execute `sudo dd if=/dev/disk1 of=Odyssey-Mk1_20140721.dmg`. Change the 'if' (input file) and 'of' (output file) values to suit your environment and preference. My 8GB SD card took 20-30 minutes to copy; it will take longer with a larger card.

Installing

When you want to install the saved disk image, the process is almost the same as installing Raspbian to a fresh SD card. First insert the SD card you want to install the disk image to, then unmount it just like in the initial installation. Then run `sudo dd if=Odyssey-Mk1_20140721.dmg of=/dev/disk1`. Again, set the 'if' and 'of' values to suit your environment.
It takes longer than making the image; in my case it took more than 2 hours. You must be really patient.

My trick

If your SD card is larger and you chose 'Expand Filesystem' during the initial configuration, both steps take longer to complete. So I keep my 8GB SD card as a master and make copies of it; that way I don't have to copy each and every byte of a 64GB card. Then I install the image to a Micro SD card with a larger capacity. Right after installation, the Micro SD card only utilizes 8GB of its space because the image came from an 8GB card.

So I run `sudo raspi-config` again and choose 'Expand Filesystem' to utilize all the remaining space.

Jul 20, 2014

For my project, I chose Adafruit's Ultimate GPS Breakout. I searched for GPS modules on the web, and Adafruit's has a detailed official guide and many reference articles by users. The spec seemed good, too.

Connecting GPS module

As shown in the second capture, connect the GPS module's VIN to the Pi's 5V, GND to GND, RX to TXD and TX to RXD. Note that TX and RX are cross-wired, since the module's input is the Pi's output and vice versa.

Check and debug

Execute `sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock` and gpsd is running. Check that you can properly read the incoming data with `cgps -s`. If it's working, the output looks something like below.

In my case it didn't work at first, and I had to go through a debugging process.
To add a debug option to gpsd, add -D followed by a number that indicates the debug level, e.g. `sudo gpsd /dev/ttyAMA0 -N -D3 -F /var/run/gpsd.sock`. I saw messages like below; it kept saying 'Satellite data no good (1 of 1).'

I couldn't really tell whether the wiring had a problem or the GPS module simply wasn't receiving data, so I checked what was coming through /dev/ttyAMA0.

The GPS module was set by a window, the soldering seemed OK, and it looked like the module was receiving some partial data. I couldn't tell what to do, so I posted to Adafruit's forum. As shown in the forum topic, they were kind and helped me a lot. Now my GPS module works perfectly and I receive consistent data.

Recommended

I didn't know GPS modules were so sensitive until I faced this problem. Now I have an external antenna attached. You can set the antenna on the roof of your car to receive better data. To do this you'll need:

Jul 19, 2014

Since the PiTFT itself was already set up in the last step, I'll now continue with the touchscreen setup. This step is also documented in the official document, so you should read through that first.

Basic setup

To start with, we make a udev rule for the touchscreen.
Then remove and re-install the touchscreen package, and check that the setup went OK. The touchscreen should now be referred to as eventN, where N is a number.

Calibration

Before the calibration process, I installed some handy tools that were introduced in the document.

Lastly

I installed mplayer for later preview use.
I downloaded a video and played it with the code below, which is exactly as documented.
Then I saw some scary messages. I'll have to check that out later; for now I'm just going to add the GPS module and see if it works.

Since we are done installing RaspiCam, we are now going to attach the PiTFT and install the required software. Before setting up the PiTFT, you should read through the complete document, because it covers pretty much everything. What we need to do is solder the hardware and install some software packages.

Hardware

When you open the PiTFT package you'll find an extra 2x13 IDC socket for an additional cobbler cable. The PiTFT's female header covers all 26 of the Pi's GPIO header pins, but it actually uses only 3 of them, so it seems a good idea to connect a cobbler cable to utilize the remaining 23 pins.
The cobbler breakout and cable package includes the parts shown below. You can stick the pins right into your breadboard, so it's really easy to add and test other modules like the GPS and microphone amplifier modules.

If you solder the IDC socket, don't solder it on the top side; do it on the reverse side.

And make sure the white cord, the pin-1 indicator, goes right next to the female header.

Once you've soldered the female header and the optional IDC socket, the hardware part is done. Set the female header onto your Pi's GPIO header pins.

Software

The software part needs a bit more work than RaspiCam did. You'll need to get 5 packages as described in the document:

Then use dpkg to install them. The install log is shown below. You need to reboot your Pi to activate the change.

Test

After rebooting your Pi, you need to load the screen driver. The commands are below. The modprobe command loads kernel modules; in this case we load spi-bcm2708 and fbtft_device.

If it goes O.K., the desktop shows up.

From the next boot, we want the Pi to load those kernel modules automatically, so we modify /etc/modules. The modules listed in /etc/modules are loaded on boot, so we don't have to repeat the previous step to load them.
Then add a new file, /etc/modprobe.d/adafruit.conf. The rotate value indicates the rotation angle, while frequency tells how fast to drive the display. If the display acts weird, the document suggests dropping the frequency value to 16000000.
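For reference, the adafruit.conf that Adafruit's guide describes looks something like the following; the exact values come from my reading of that guide, so treat this as an example rather than gospel:

```
options fbtft_device name=adafruitts rotate=90 frequency=32000000
```

Dropping frequency to 16000000 here is the fix the document suggests for a glitchy display.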
Reboot and add some calibration settings.
Add some settings to .profile for later convenience.
Then run `startx` and the desktop will show up.

Display STDOUT on PiTFT

If you wish to send standard output to the PiTFT on boot, make some changes to /boot/cmdline.txt like below.

Then hit some commands and the PiTFT will display the standard output.
When you want to turn the backlight off, simply write 0 to /sys/class/gpio/gpio252/value.

Shooting video is as easy as taking a picture. Just execute `raspivid -o video.h264 -t 10000`. The -o option sets the output file; if '-' is given, output goes to STDOUT. -t is the duration in milliseconds. So the command above gives you a 10-second video named video.h264. To easily convert it to .mp4 you can install gpac and use the MP4Box command.

Connecting camera board

The camera board is as small as 25mm x 20mm. Just stick the end of the cable into your Raspberry Pi and the connection is done.

Taking a picture via CLI

Since we installed a VNC server on the Raspberry Pi and a VNC client on the iMac, we can mirror the Pi's screen on the iMac. So what I do here is move to the Pi's desktop with `cd ~/Desktop` and take a picture with `raspistill -o image.jpg`. The photo is saved as image.jpg on the desktop and appears on your VNC screen. Double-click image.jpg and you'll see your photo displayed.

Hitting `raspistill` without any options shows the manual for the raspistill command. You can also find the detailed 'RaspiCam Documentation' and the official document at raspberrypi.org.

With the previous steps you installed Raspbian, finished the initial configuration and updated/upgraded Raspbian, so the minimum setup is all done. Before connecting the RaspiCam to take pictures and/or shoot video, I think we should install a VNC server and client. With these, you won't have to connect your Raspberry Pi to an external display to see its screen output; you can mirror the screen on your PC.

To display and interact with the Raspberry Pi's screen on my iMac, I installed a VNC server on the Raspberry Pi and a VNC client on the iMac. The VNC client mirrors the screen on the client PC, making it easier to debug the camera module while taking video and pictures.

Server on Raspberry Pi

All the steps are covered in 'Take Control of Your Raspberry Pi Using Your Mac, PC, iPad or Phone' and are really straightforward, so you should read it once. Basically all you need to do is `sudo apt-get install tightvncserver` and the server is ready. Then execute `tightvncserver` and type in a password to run the VNC server.

Client on iMac

Visit the download page and install Chicken of the VNC, then launch it. The default port is 5901, the host IP is the one you use for your ssh connection, and the password is the one you just set. Type them in and it's done. Remember not to check 'view only', or you won't be able to click anything on the mirrored desktop.

Since the system configuration is done and the SSH AcceptEnv problem is solved, I think it's a good time to update/upgrade Raspbian. It doesn't matter if you installed the latest version of Raspbian; even if you did, you should still update/upgrade before going any further. If you skip this step, the camera module may not work. It's as easy as doing `sudo apt-get update` and `sudo apt-get upgrade`, so why not do it?

While going through raspi-config in the previous step, I saw a lot of warning messages like below.
To fix this, simply edit /etc/ssh/sshd_config and comment out "AcceptEnv LANG LC_*". My iMac sends some language settings, but this modification lets my Raspberry Pi ignore them.

After successfully installing the OS to your SD card, it's time to stick it into your Raspberry Pi and do the initial configuration. Connect a LAN cable to the Raspberry Pi, insert the SD card and connect the power cable. The Raspberry Pi starts up, and several seconds later it can be accessed via ssh. The IP address can be found with iNet or some other equivalent app/method. The initial username and password are pi and raspberry.

raspi-config

Run `sudo raspi-config` to start configuration. You will see a console like below.

1. Expand Filesystem

Usually you wanna expand the filesystem. OS distribution images are normally around 2GB, so when your SD card has unused space, this expands the filesystem to fill the rest of the card, giving you more space to use. If you can't find a particular reason not to, you should do this. The change takes effect after a reboot.

2. Change User Password

This changes the password for the pi user. As I wrote above, the initial password is raspberry. You should change it.

3. Enable Boot to Desktop/Scratch

I left it with default value.

4. Internationalisation Options

Locale

en_GB.UTF-8 is selected by default, so I unchecked it and checked en_US.UTF-8. Hit the space key to toggle, then hit Tab and choose OK.

Timezone

Since I visit a lot of places for road trips, including Japan, the U.S. and Australia, it's kind of pointless to set a particular timezone. Besides, my GPS module gives me UTC time, so I chose 'None of the above > UTC' anyway.

Change keyboard layout

I use a U.S. keyboard, so I didn't have to change anything here.

5. Enable Camera

The default is 'Disabled', but this project is definitely going to use the camera, so I enabled it.

6. Add to Rastrack

It has nothing to do with my project, but it's fun. You might wanna do it while you have an internet connection.

7. Overclock

No.

8. Advanced Options

Some advanced settings. For this project, make sure SSH and SPI are enabled.

That's it. When you choose Finish, it asks whether to reboot now. You should let it reboot to activate the changes. During the configuration process, you might have seen some warning messages like below.
If so, you can fix them by following the next article. If you didn't have trouble, you can skip the next one and go straight on to updating/upgrading Raspbian.

In my previous article I described why I'm working on this project and what its goal is. This article covers how to install the OS and get your Raspberry Pi ready to boot.

To install the OS, you need an SD card or a Micro SD card with an adaptor. You are going to install your choice of OS image to it and stick the SD card into the Raspberry Pi. Some starter kits offer a Raspberry Pi with an SD card that has the OS preinstalled. It's easy to get started with one of those, but as you work on your project you'll want to make a copy of your OS image before making a big modification. You don't wanna mess up your entire work and start all over again, right? So sooner or later you'll have to know how to install the OS on your SD card and make copies of it, and I think it's nice to learn it from the start.

1. Download OS Image

Visit the official download page and download the latest version of Raspbian. There are other distributions listed on that page, but I think Raspbian is the easiest for beginners, since we can find many reference articles on the web. I downloaded the .zip archive. We are going to unzip and install this OS image to the SD card.

2. Unmount SD Card

Insert your SD card into the iMac and launch the terminal. Type `df -h` and see what appears. When I did, the SD card wasn't recognized, so I launched Disk Utility and formatted the card. Then `df -h` showed the file systems like below. Check which one to unmount and do `sudo diskutil unmount /dev/********`, replacing '********' to suit your environment. Remember this device name, since you are going to use it later.

3. Flash disk image

After unmounting your SD card in the previous step, it won't appear in `df -h` anymore. Unzip the downloaded OS image, then flash it to the unmounted SD card with the dd command. Be careful what you give to 'of', because it is a bit different from the name you remembered in the previous step: you need to omit the trailing 's1' and replace "disk" with "rdisk", e.g. '/dev/disk1s1' becomes '/dev/rdisk1'. This process may take a while and gives no visual feedback to indicate progress, so you must be patient.
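The device-name rule above (drop the partition suffix, use the raw device) is easy to get wrong at 2 a.m., so here is a tiny helper that encodes it. This is just an illustration of the naming rule, not part of the project:

```python
import re

def raw_device(partition):
    """Turn a macOS partition name into the raw whole-disk device
    for dd, e.g. '/dev/disk1s1' -> '/dev/rdisk1'."""
    m = re.fullmatch(r"/dev/disk(\d+)(s\d+)?", partition)
    if m is None:
        raise ValueError("unexpected device name: %s" % partition)
    return "/dev/rdisk" + m.group(1)
```

You would pass the result as the 'of' value to dd (the 'rdisk' form bypasses the buffer cache, which is why it flashes faster).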

Every year my wife and I go on a road trip with no detailed schedule or motel reservations; we just book our flight and rental car, and the rest is up to us. Last time we started from the L.A. airport, stayed one night in Las Vegas and won $15 at a casino, stopped by some national parks, visited our college in Oklahoma, saw a rodeo in Fort Worth and arrived at the Dallas/Fort Worth airport.

We usually drive 2000 to 3000 miles in a week, which means we spend most of our time in a mid-class rental car. We don't have time to pull over every time my wife spots some fancy view. So, before the last trip in June, I came up with the idea of having a drive recorder record everything. If it worked, we could concentrate on what we were seeing instead of doing something nonsensical like peering into a tiny camera viewfinder right in front of a great wilderness passing by.

It didn't work out, however. I bought a cheap drive recorder with a GPS module to start with, but it broke in 2 days. It was a shock: a tool that was supposed to assist my annual trip became a piece of junk in a couple of days. That night at the Anasazi Inn, my second idea came up: making a drive recorder myself.

After the trip I searched the web and found out that a Raspberry Pi and some handy peripherals can make a cheap drive recorder with a GPS logger. The Raspberry Pi and its peripherals are inexpensive and easily obtainable, so even when something breaks I can fix it myself, which I think is a great advantage after experiencing that terrible breakage.

I named my project Odyssey. I thought it made perfect sense given our unplanned annual trips. My favorite part is that the Odyssey is attributed to Homer, while my name is Oklahomer and people call me Homer. Coincidence? I don't think so.

May 5, 2014

Today I tried `plenv install-cpanm` and it failed. I remember it worked fine a couple of months ago, and the only thing I can think of is that I did `yum update openssl` last month. I'm pretty sure I didn't do anything else nasty.

I searched and found someone with the same problem. I followed what he did, as shown below.
Then I succeeded in running `plenv install-cpanm`.

May 3, 2014

On 2014-04-30 Facebook announced API versioning to improve its platform's stability. Each version is guaranteed to work for 2 years, so we no longer have to fear the 90-day breaking change policy. The version we were using prior to this introduction is called version 1.0, while the new version introduced along with the versioning policy is called version 2.0.

Versioned Request

Making a versioned request is pretty easy. Just like with other major platform APIs, you prepend the version number to the endpoint like below:

/v2.0/me/friends

Unversioned requests are also supported: just don't prepend a version number and the request is considered "unversioned." The document says "An unversioned call will default to the oldest available version of the API," but here is one tricky part:

For apps that existed before April 30th 2014, making an API call without specifying a version number ('unversioned') is equivalent to making a call to the v1.0 of the API.
For apps created on or after April 30th 2014, making an API call without specifying a version number is equivalent to making a call to v2.0 of the API.

So when you make unversioned requests, just be aware of when the app was created.
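The quoted rule boils down to a single date comparison. Here is a small illustration (in Python rather than this module's Perl, purely for the sake of a runnable sketch); the cutoff date comes straight from the quoted documentation:

```python
from datetime import date

# Apps created before this date default to v1.0 on unversioned calls;
# apps created on or after it default to v2.0.
CUTOFF = date(2014, 4, 30)

def default_version(app_created):
    """Return the Graph API version an unversioned call resolves to."""
    return "v1.0" if app_created < CUTOFF else "v2.0"
```

So two apps making byte-for-byte identical unversioned requests can hit different API versions, which is exactly why explicit versioning is the safer habit.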

Facebook::OpenGraph 1.21

I implemented versioning support in Facebook::OpenGraph and released it as Ver. 1.21. This change is backward compatible: if you update the module and don't change your code, it makes unversioned requests, exactly like the previous version of this module.

API Versioning

With this version, you can specify a default version on initialization as follows:

If you don't specify any version, it is left undef and each request becomes an unversioned request.
You can still set the version on each request by specifying a versioned path such as /v1.0/me/friends, which overrides the version configuration given on initialization.

my $fb = Facebook::OpenGraph->new(+{version => 'v2.0'});
$fb->get('/zuck'); # Use version 2.0 by accessing /v2.0/zuck
$fb->get('/v1.0/zuck'); # Ignore the default version and use version 1.0

my $fb = Facebook::OpenGraph->new();
$fb->get('/zuck'); # Unversioned API access since version is not specified
                   # on initialisation or each request.

Apr 27, 2014

Work with JS SDK

Of the 2 newly added features, one deals with the cookie set by the JS SDK. The detailed implementation process can be seen in the GitHub issue. $fb->js_cookie_name gives you the name of the cookie, and $fb->get_user_token_by_cookie( $cookie_value ) hits the auth endpoint and exchanges the cookie value for a user access token. Sample usage is shown below:

Access Response Header

The other feature deals with response headers. $res->header( $header_field_name ) provides access to any given header field. As a shorthand, $res->etag is provided, since $fb->fetch_with_etag supports access with an ETag value.
Besides the ETag header field, there are some fields you should know about: X-FB-Rev and X-FB-Debug. As described in the official blog entry, X-FB-Rev indicates the Facebook Platform's internal revision number. No official document is provided for X-FB-Debug, but it must be something that helps them resolve issues. So when you get an error response even though your request seems legitimate, you might want to check $res->header('x-fb-rev') and $res->header('x-fb-debug') and include them in your bug report.

Back in 2009-2010, the official documents were totally outdated and we had to exchange information on the official wiki. Even in 2013 they did not document the appsecret_proof feature, so we had to look into the official PHP SDK code to figure out its implementation and translate it into the language of our choice. It is really frustrating when you have to implement something yourself but cannot tell whether your implementation satisfies the specification.

Now, however, the official documents are getting better day by day. They state more detailed specs and provide more step-by-step sample code. So what I did was add more links to the corresponding parts of the documents, and more quotes in the form of comments and POD. The quoted comments are copied directly from the official documentation and appended to each corresponding logical part. This way you, or a future maintainer, can see the spec and the implementation at once. When this module does not work as expected, compare these comments and the implementation against the linked document; if there is any difference, the document and spec have been updated but this module has not. The documentation no longer shows a last-updated date, so I think this is the easiest way to catch up with spec changes.

Mar 21, 2014

This article is basically a follow-up to my previous article: Teng::Row and data2row.
As of Teng Ver. 0.21, data2row is implemented as Teng#new_row_from_hash. I already introduced how I used this method in my previous article, so here I am going to introduce 2 different usages.

Examples are shown at the bottom.

The first example describes how to cache column values and then create a table row object from the cached values. With Devel::Size I compared the size of a row object with that of its column values, and realized that row objects are much bigger. So my idea is to cache only the column values to minimize the cache size, and then create the table row object from the cached values.

The second example shows how I utilize the table row object's methods by creating a temporary object. In this case I'm uploading an image to a certain path, and the path is generated from the filename and campaign_type. I didn't want to generate the path in 2 places; I wanted to generate it in one place so I wouldn't mess things up by modifying one and forgetting the other.

After retrieving this, I have to sort the result in result.foo1, result.foo2, ..., result.foo100 order. I wonder why they don't return this as an array, but I have to deal with it as long as the provider returns values this way. Although Perl Best Practices insists on using Sort::Maker, I implemented the sort my own way because it wasn't that complicated.
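The tricky part of this sort is that plain lexicographic ordering puts result.foo10 before result.foo2. Here is the idea sketched in Python rather than Perl (the key-extraction-up-front approach is the same trick the orcish maneuver and Schwartzian transform perform: compute each expensive key only once):

```python
import re

def suffix_number(key):
    """Extract the trailing integer from a key such as 'result.foo42'."""
    return int(re.search(r"(\d+)$", key).group(1))

def sort_results(keys):
    # sorted()'s key= computes suffix_number once per element before
    # comparing, which is what makes this efficient for large lists.
    return sorted(keys, key=suffix_number)
```

In Perl, Sort::Maker generates an equivalent sort sub for you, which is what the benchmark below compares against.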

Today I benchmarked each sort type provided by Sort::Maker and found that using Sort::Maker was much faster. The code is below. It was a bit surprising that my original method with the orcish maneuver is 13% slower than Sort::Maker's version with the orcish maneuver. Needless to say, the other sort types are much faster. So my conclusion is that even a relatively simple sort should be implemented with Sort::Maker to increase readability, maintainability and performance.

Feb 9, 2014

The other day I talked about Lingua::PigLatin and how it works. That module really helped me understand how Pig Latin works, but it only handles English-to-Pig-Latin translation. So I created Lingua::PigLatin::Bidirectional. As the name implies, it translates English sentences to Pig Latin, and vice versa.

Feb 2, 2014

The soba eating experience is totally different from that of spaghetti. It's not just the difference between forks and chopsticks; it also involves a difference in table manners, so understanding and mastering soba eating can be an indicator of how well someone understands Japanese culture. Then how should I teach westerners, who grew up with the manner of not making noise while eating pasta, how to eat soba?
Lately I found a good article, "The sound makes the experience," which approaches this problem from both cultural and technical angles. It describes the reason for making noise: it is a polite gesture that lets the cook know how much you are enjoying the meal. It then describes the technique as follows:

For me, the best way to conceive of the proper slurping technique is like when you are eating a very hot slice of pizza. You take a small bite, and because it's hot, you start to suck in air while chewing. This allows you to eat the food while it is still very hot, while you are breathing in. This is the same manner in which you should eat soba in Japan.

This helped me a lot. For Japanese like me, it is very difficult to teach this kind of basic skill because we all acquired it during childhood and do not remember how we learned it. Even worse, we can't understand why others can't do it.
This leads me to the conclusion that Japanized foreigners are better teachers than we Japanese ourselves. I'll follow such foreign media to learn how I should teach Japanese culture.

When I say 'I hate Glocks' to my friends, I'm not just talking about their plastic frames and Good Ol' 'Merica. I'm 28, and I don't think I'm old enough to say that. I'm not even American, after all. Actually, I know Glocks have some cool features, and that's why I carefully say 'Glocks are not for everyone.' I'm going to explain why I think that way.

Premise: Its Uniqueness

Glock's company history is briefly introduced on a web page, Timeline | GLOCK USA. The company started out as a plastic and steel parts manufacturer and then, in the 1970s, shifted into the military industry, including knives. When it started gun manufacturing in the 1980s, it brought its invention of a nylon-based polymer, Polymer 2, to the gun industry. That's the Glock 17. So its origin and design policy are totally different from those of gun manufacturers with long histories such as Colt, Winchester and Smith & Wesson. This critical difference shows up in weight balance and safety mechanism.

Weight Balance

When I lived in Oklahoma -- BTW, that's why I call myself Oklahomer -- I visited H&H Shooting Sports every other week for sport shooting. My favorite full-size handgun was the Sig Sauer P226, and my second was the Colt M1911. Compared to those two handguns, Glocks are extremely light. The weight of the cartridges in the magazine doesn't differ much, so the lightness of the front half stands out, and that makes it harder to handle the recoil and aim the second shot.
I know lightness is important for those who must carry a gun on a daily basis, such as police officers, but it doesn't benefit me.

Its Mechanical Simplicity and Handling Complexity

Glock's safety mechanism, called the Safe Action system, is pretty simple and unique. It's all about the trigger, and it doesn't include anything like the M1911's manual and grip safeties or the P226's decocking lever, which I think is the biggest difference where safety is concerned.
I understand this Safe Action system is a reliable mechanism, but it's just a mechanism. *WE*, humans, make mistakes. To avoid misfires, I believe we need a decocking lever or at least a cocking indicator. Of course it should be O.K. as long as the shooter, such as a law enforcement officer, handles only Glocks and has enough time for continual training. Others, like sport shooters, who handle various guns and can't afford to train on a daily basis, should choose guns with a more common safety mechanism that includes a cocking indicator and a decocking lever.

Conclusion

As I described above, Glocks are very unique in terms of their lightness and safety mechanism. I believe this can benefit law enforcement officers with adequate training and a need for portability, but it can be a drawback for occasional sport shooters.
By the way, I love Gunny from Full Metal Jacket, lol.

Jan 21, 2014

Every once in a while I hear characters speak Pig Latin in movies. I repeat the line many times in my mind and try to work out the original, but it takes time because English is my second language and I'm not used to the transformation. So I came up with this idea: look up Wikipedia and related linguistics articles, and create a Perl module that pig-latinizes given sentences. That should help me understand the rule.
Before launching vim, I searched CPAN for a similar module, and it didn't take long before I found what I wanted. Lingua::PigLatin converts a given sentence to Pig Latin with the simple regular expression below.

s/\b(qu|[cgpstw]h # First syllable, including digraphs
|[^\W0-9_aeiou]) # Unless it begins with a vowel or number
?([a-z]+)/ # Store the rest of the word in a variable
$1?"$2$1ay" # move the first syllable and add -ay
:"$2way" # unless it should get -way instead
/iegx;

Since my goal was to understand this game's rules through coding, I read through what this regular expression does. I'm not a regular expression expert and it was a bit difficult to understand at once, so, with the help of Perl Best Practices, I modified its coding style to make it more readable, as below.
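The reformatted snippet itself isn't reproduced here, so the following is my own approximation of what a Perl Best Practices style rewrite of the same substitution might look like, wrapped in a small helper function for convenience. The behavior is unchanged from the original regex; only the layout, delimiters and a defined() check differ:

```perl
use strict;
use warnings;

sub piglatin {
    my ($text) = @_;
    $text =~ s{
        \b                   # at a word boundary,
        ( qu                 # "qu",
        | [cgpstw] h         # a digraph (ch, gh, ph, sh, th, wh),
        | [^\W0-9_aeiou]     # or any single consonant...
        )?                   # ...is optional
        ( [a-z]+ )           # then the rest of the word
    }{
        defined $1
            ? "$2$1ay"       # consonant start: move it and add -ay
            : "$2way"        # vowel start: just add -way
    }iegx;
    return $text;
}
```

For example, piglatin('pig latin') yields 'igpay atinlay', while a vowel-initial word like 'apple' becomes 'appleway'.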

Now things became clearer. Here is what it does.
First, it anchors at every word's beginning with the \b at the very start of the expression. It then checks whether the word starts with any of the three patterns below:

starts with "qu"

starts with digraphs such as ch, gh, ph, sh, th and wh to capture words like channel, shell and what

starts with any word character other than 0-9, _ and vowels (AEIOU)

Second, it checks that the following characters are all alphabetic.
If both the first and second steps match, the first part is moved to the end of the word and -ay is appended; if only the second step matches, it just puts -way at the end; if neither matches, the word is left untouched.

O.K., now I understand how Pig Latin works. But I have a new question: do Americans really do this in their heads while saying whatever randomly comes to mind? I can't believe it...