Recommended

2016-11-08

This blog post explains how to fix Python SSL errors when downloading web pages over the https:// protocol in Python (e.g. using the urllib, urllib2, httplib or requests libraries). This blog post has been written because many other online sources don't give direct and useful advice on how to fix the errors below.

How to fix SSL23_GET_SERVER_HELLO unknown protocol

This error looks like (possibly with a line number different from 504):
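A typical traceback (reconstructed from the error name; the exact line number and error code vary by Python version) ends with a line like:

```
ssl.SSLError: [Errno 1] _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
```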

If you are using Python 2, upgrade to at least 2.7.7. It's recommended to upgrade to the latest (currently 2.7.11) though, or at least to 2.7.9 (which has a backported ssl module, including the ssl.SSLContext customizations from 3.4). I have tested that the error above disappears when upgrading from 2.7.6 to 2.7.7. If you can't easily upgrade the Python 2 on the target system, you may want to try StaticPython on Linux (the stacklessco2.7-static and stacklessxx2.7-static binaries have OpenSSL and a recent enough Python) or PyRun on Linux, macOS, FreeBSD and other Unix systems.

If you are using Python 3, upgrade to at least 3.4.3. It's recommended to upgrade to the latest (3.5.2) though.

If you are unable to upgrade from Python 2.x, try this workaround; it works in some cases (e.g. on Ubuntu 10.04) and on some websites:
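A sketch of such a workaround (the widely circulated recipe that monkey-patches ssl.wrap_socket to force TLSv1, skipping the SSLv23 auto-negotiation that triggers the SSL23_GET_SERVER_HELLO error on some servers):

```python
# Python 2.x workaround (sketch): force TLSv1 in ssl.wrap_socket.
import ssl
from functools import wraps

def force_tlsv1(wrap_socket):
    @wraps(wrap_socket)
    def wrapper(*args, **kwargs):
        kwargs['ssl_version'] = ssl.PROTOCOL_TLSv1
        return wrap_socket(*args, **kwargs)
    return wrapper

if hasattr(ssl, 'wrap_socket'):  # ssl.wrap_socket was removed in Python 3.12.
    ssl.wrap_socket = force_tlsv1(ssl.wrap_socket)
```

After this runs, urllib2.urlopen and friends use TLSv1 for every connection made in the process.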

There is a similar workaround for ssl.sslwrap_simple which also affects socket.ssl.

If you are unable to upgrade from Python 2.6.x, 3.2.x or 3.3.x, use backports.ssl.

If you are unable to upgrade from Python 1.x–2.5 or 3.0.x–3.1.x, then probably there is no easy fix for you.

Typically it's not necessary to upgrade your OpenSSL library just to fix this error: ancient versions such as OpenSSL 0.9.8k (released on 2009-03-25) also work if Python is upgraded. The latest release from the 0.9.8 series (currently 0.9.8zh) or from the 1.0 series or from the 1.1 series should all work. But if you have an easy option to upgrade, then upgrade to at least the latest LTS (long-term support) version (currently 1.0.2j).

How to fix SSL CERTIFICATE_VERIFY_FAILED

This error looks like (possibly with a line number different from 509):
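A typical traceback (reconstructed from the error name; wording and line number vary by Python version) ends with a line like:

```
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:509)
```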

Server certificate verification by default has been introduced to Python recently (in 2.7.9). This protects against man-in-the-middle attacks, and it assures the client that the server is indeed who it claims to be.

As a quick (and insecure) fix, you can turn certificate verification off, by at least one of these:

Set the PYTHONHTTPSVERIFY environment variable to 0 before the ssl module is loaded, e.g. run export PYTHONHTTPSVERIFY=0 before you start the Python script.

(alternatively) Add this to your code before doing the https:// request (it affects all requests from then on):
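For example, this sketch uses the hook added in Python 2.7.9/3.4.3 (the underscore-prefixed names are private but widely used for this purpose):

```python
# Insecure: disables server certificate verification for ALL subsequent
# https:// requests made through the default context in this process.
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
```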

The proper, secure fix though is to install the latest root certificates on your computer, to a directory where the OpenSSL library used by Python finds them. Your operating system may be able to do this conveniently for you; for example, on Ubuntu 14.04, running this usually fixes it: sudo apt-get update && sudo apt-get install ca-certificates

2016-06-20

This blog post explains how to back up photos on Google Photos and Google Drive as is, keeping the original images files, bit-by-bit identical, without any scaling or reencoding.

TL;DR If you want to keep the original image files, upload the photos to Google Drive, which keeps the original files (bit-by-bit identical as uploaded), and their size counts against your Google storage quota (see your usage). Don't upload any image file to Google Photos.

TL;DR If you want unlimited image uploads with the option of downloading the original image file (bit-by-bit identical), consider options other than Google Photos (such as Flickr and Deviantart).

On Google Photos you can upload some images for free (i.e. those images don't count against your Google storage quota). This is the most important advantage of uploading to Google Photos (rather than Google Drive). But there are some important caveats:

You need to decide before uploading whether you want free (it's called high quality) or not (original). Select it in the Google Photos settings. This setting won't affect photos uploaded from your Android devices by the photo backup app.

If you decide non-free (original), future uploads will be counted against your quota, no matter the size, the file format or the quality. That is, even small, low-quality JPEGs will count against your quota.

If you choose free and you upload a PNG file of at most 16 megapixels, the original file is kept, and you'll be able to download it later.

If you choose free and you upload a PNG file of more than 16 megapixels, then it will be scaled down and reencoded.

If you choose free and you upload a JPEG file, then the photo gets scaled down to 16 megapixels (no change if it's already small enough), and then reencoded with a quality loss (small enough that most humans don't notice it); the reencoding also removes or rearranges some metadata (e.g. EXIF), and only the scaled and reencoded JPEG file is available for download.

Google Photos does deduplication of your images. This has an unintended consequence. If you upload a photo 3 times to 3 different albums, and then move the photo to the trash, it will be removed from all 3 albums. There is no way to move some photos of an album to the trash without affecting other albums.

Google Photos does deduplication even across qualities. Thus if you upload an image as original first, and then upload it again as high quality, the high quality version will be ignored, and the original version will be present in both albums. It's also true the other way round: if you upload high quality first, then subsequent original uploads of the same image will be ignored.

Even with non-free (original), Google doesn't remember the original file name, as uploaded: it converts e.g. the .JPG extension to .jpg (lower case).

Immediately after the upload, the image info page shows incorrect information, and the Download link serves an incorrect (lower resolution) version of the image. For example, when I uploaded a new 1.1 MB JPEG file in high quality mode, the image info was showing 1.1 MB, but when downloading it, it became a 340 kB JPEG file. After reloading the image page, the image info was showing 550 kB, and downloading it yielded a file of that size. This makes experimenting with image upload sizes confusing.

This is as of 2016-06-20, the behavior of Google Photos may change in the future.

Because of these caveats and unexpected behavior, to avoid quality loss, my recommendation is not to use Google Photos for backing up JPEG image files.

2016-05-30

This blog post explains how to install Hungarian spell checking and hyphenation for LibreOffice and OpenOffice on Linux. The instructions were tested on Ubuntu Trusty, but they should work well on other Linux distributions as well with small modifications.

The installation consists of downloading the right files, copying them to the right location, and restarting LibreOffice and/or OpenOffice.

Install LibreOffice or OpenOffice with your favorite package manager if not already installed.

Download the files from http://magyarispell.sf.net/ (no need to click now) by running these commands (without the leading $) in a terminal window:

Start LibreOffice or OpenOffice, type asztall, select it, and change the language to Hungarian in Format / Character. Now the text should be underlined with red, and when you right-click on it, the suggestion asztal should be offered.

2016-04-06

This blog post explains how to disable (reject) any root password on Debian and Ubuntu, thus rejecting login attempts as root. Becoming root with sudo (by typing the calling user's password) or ssh (using a public key) remains possible.

TL;DR Run as root: passwd -d -l root

How to become root if password-based root logins are (or will be) disabled?

Before disabling password-based root logins, make sure you have other ways to become root. One possible way is running sudo (without arguments) from a non-root user. To make this work, first you have to install sudo by running (without the leading #) as root:

# apt-get install sudo

(Ubuntu systems come with sudo preinstalled; Debian systems don't have it by default.) Then run as root, replacing MyUser with your non-root login name:

# adduser MyUser sudo

After running this, running sudo as that user will ask for the user's password (not the root password), and when typed correctly, you will get a root shell, and will be able to run commands as root. (Type exit to exit from the root shell.)

An alternative to sudo for becoming root without a password is running ssh root@localhost. For that you need a properly configured sshd (with PermitRootLogin without-password or PermitRootLogin yes in /etc/ssh/sshd_config), creating an SSH key pair and appending the public key to /root/.ssh/authorized_keys. If you need help setting this up or using it, then please ask a Unix or Linux guru friend.

How to disable password-based root logins

To disable (reject) any root password on Debian and Ubuntu, run this (without the leading #) as root:

# passwd -d -l root

This effectively changes the 2nd field of the line starting with root: in /etc/shadow to !, thus the line will start with root:!:, making login, su and ssh (when using password authentication, i.e. no public key) reject login attempts as root. Typically the password isn't even asked for, but if it is, any password will be rejected. An alternative to the command above is editing the /etc/shadow file manually (as root) and adding the !. Also, the -d flag is not necessary; without it the password hash is still kept in /etc/shadow (but a ! is prepended to disable it).
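For illustration (the hash and the day-number fields below are made up), the root line in /etc/shadow changes like this:

```
root:$6$examplesalt$examplehashvalue:16000:0:99999:7:::    before
root:!:16000:0:99999:7:::                                  after passwd -d -l root
```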

Ubuntu comes with this default (root:!: in /etc/shadow), Debian doesn't.

If you want to disable the root password in ssh only (and allow password-based root logins in login and su), then instead of running the command above, add (or change) the line

PermitRootLogin without-password

to /etc/ssh/sshd_config (as root), and then run (as root):

# /etc/init.d/ssh restart

Please note that there are ways to permit a root login without a password (or with an empty password), but this is very bad security practice, so this blog post doesn't explain how to do it.

This blog post is to announce statically linked binaries for Linux i386 of MicroPython.

MicroPython (Python for microcontrollers) is an open source reimplementation (see sources on GitHub) of the Python 3.4 language for microcontrollers with very little RAM (as low as 60 kB). The CPython interpreter is not used at all: MicroPython is a completely separate implementation in C, supporting the full Python 3.4 language syntax, but with a much smaller standard library (i.e. much fewer modules and classes, and the existing classes have fewer methods). Unicode strings (i.e. the str class) are supported though.

MicroPython can be cross-compiled to many different platforms, among them multiple microcontrollers (such as the ESP8266 ($5) and the pyboard ($40)) and Unix systems (including Linux). The micropython binary seems to be 17.56 times smaller than the python binary for Linux i386 (both binaries were statically linked against uClibc using https://github.com/pts/pts-clang-xstatic/blob/master/README.pts-xstatic.txt, and optionally compressed with UPX). The detailed file sizes are:

Please note that neither StaticPython nor MicroPython opens any external files (such as .so or .py or .zip) when starting up: the entire Python interpreter and the Python standard library (and the libc as well) are statically linked into the binary executable.

2016-01-06

This blog post explains how to extract comments from a JPEG file. Each JPEG file consists of segments. Each segment describes parts of the image data or metadata. The comments are in segments with marker COM (0xfe); there can be any number of them, anywhere (usually before the SOS segment) in the file.
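To make the segment structure concrete, here is a minimal Python sketch of the same COM extraction (jpeg_comments is a hypothetical helper written for illustration, not part of libjpeg):

```python
import struct

def jpeg_comments(data):
    """Yields the payload of each COM (0xfe) segment in JPEG bytes.

    A sketch: stops at the SOS segment and doesn't handle 0xff fill
    bytes, which is enough for typical files.
    """
    assert data[:2] == b'\xff\xd8', 'missing SOI marker, not a JPEG file'
    i = 2
    while i + 4 <= len(data):
        assert data[i] == 0xff, 'expected a marker byte'
        marker = data[i + 1]
        if marker == 0xda:  # SOS: entropy-coded image data follows.
            break
        if marker == 0x01 or 0xd0 <= marker <= 0xd9:
            i += 2  # Standalone marker without a length field.
            continue
        length, = struct.unpack('>H', data[i + 2:i + 4])  # Includes itself.
        if marker == 0xfe:  # COM segment.
            yield data[i + 4:i + 2 + length]
        i += 2 + length
```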

Use the rdjpgcom command-line tool to extract comments. The tool is part of libjpeg, and on Ubuntu and Debian systems it can be installed with (don't type the leading $):

$ sudo apt-get install libjpeg-progs

Once installed, use it like this to print all comments in the JPEG file, with a terminating newline added to each:

$ rdjpgcom file.jpg

If the file doesn't have any comments, the output of rdjpgcom is empty. Comments can be added with the companion wrjpgcom tool (also part of libjpeg), which writes a copy of the JPEG file with the comment appended:

$ wrjpgcom -comment "some text" file.jpg >file-commented.jpg

2015-12-05

This blog post announces flickrurlget, a command-line tool for Unix, written in Python 2.x, that can be used to download photos from Flickr in batch. flickrurlget itself doesn't download photos; it generates a list of raw photo URLs which can be downloaded with a download manager (even with wget -i).

2015-11-27

This blog post explains how to compute the sorted intersection of two sorted lists, and it shows a fast Python implementation. The time complexity is O(min(n + m, n · log(m))), where n is the minimum of the list lengths, and m is the maximum of the list lengths.

The first idea is to take the first element of both lists, and, if they are different, discard the smaller one of the two. (The smaller one can't be in the intersection because it's smaller than all elements of the other list.) If the first elements were equal, then emit the value as part of the intersection, and discard both first elements. Continue this until one of the lists runs out. If discarding is implemented by incrementing index variables, this method finishes in O(n + m) time, because each iteration discards at least one element.

We can improve on the first idea by noticing that it's possible to discard multiple elements in the beginning (i.e. when the two first elements are different): it's possible to discard all elements which are smaller than the larger one of the two first elements. Depending on the input, this can be a lot of elements; for example, if all elements of the first list are smaller than all elements of the second list, then it will discard the entire first list in one step. In the general case, binary search can be used to figure out how many elements to discard. However, binary search takes logarithmic time, so the total execution time is O(m · log(m)), which can be faster or slower than the O(n + m) of the previous solution. In fact, by more careful analysis of the number of runs (blocks which are discarded), it's possible to show that the execution time is just O(n · log(m)), but that can still be slower than the previous solution.

It's possible to combine the two solutions: estimate the execution time of the 2nd solution, and if the estimate is smaller than n + m, execute the 2nd solution; otherwise execute the first solution. Please note that the estimate also takes into account the constant factors (not only the big-O). The resulting Python (≥2.4 and 3.x) code looks like this:

def intersect_sorted(a1, a2):
  """Yields the intersection of sorted lists a1 and a2, without deduplication.

  Execution time is O(min(lo + hi, lo * log(hi))), where
  lo == min(len(a1), len(a2)) and hi == max(len(a1), len(a2)).
  It can be faster depending on the data.
  """
  import bisect, math
  s1, s2 = len(a1), len(a2)
  i1 = i2 = 0
  if s1 and s1 + s2 > min(s1, s2) * math.log(max(s1, s2)) * 1.4426950408889634:
    bi = bisect.bisect_left
    while i1 < s1 and i2 < s2:
      v1, v2 = a1[i1], a2[i2]
      if v1 == v2:
        yield v1
        i1 += 1
        i2 += 1
      elif v1 < v2:
        i1 = bi(a1, v2, i1)
      else:
        i2 = bi(a2, v1, i2)
  else:  # The linear solution is faster.
    while i1 < s1 and i2 < s2:
      v1, v2 = a1[i1], a2[i2]
      if v1 == v2:
        yield v1
        i1 += 1
        i2 += 1
      elif v1 < v2:
        i1 += 1
      else:
        i2 += 1

The numeric constant 1.4426950408889634 in the code above is 1/math.log(2).

The code with some tests and with support for merging multiple sequences is available on GitHub here.

2015-03-20

This blog post explains how to copy files using rsync between a computer running Unix (typically Linux or Mac OS X) and a mobile device running Android, using a USB cable. It's not necessary to root the device. It's not necessary to install any app.

If you want to copy over wifi rather than USB, then please use the app rsync backup for Android (rsync4android) instead. The rest of this tutorial describes a method which needs the computer and the mobile device connected with a USB data cable.

Enable USB debugging on the device. If you don't know how to do it in Settings, then find a tutorial online.

If it doesn't work, you may have to enable Settings / Developer options / USB debugging on the device, then reconnect, then click OK on the dialog box in the device, then rerun adb shell id on the computer.

Please note that on Cyanogenmod rsync is installed by default and it is on the $PATH, so you can skip some of the steps below. (If you don't know what to skip, just do everything anyway.)

Make the rsync binary on the device executable: adb shell chmod 755 /data/local/tmp/rsync

Make sure you have a backup copy of the binary in a more permanent directory: adb shell cp /data/local/tmp/rsync /sdcard/rsync.bin

Get the rsync version by running adb shell /data/local/tmp/rsync --version . Typical output:

$ adb shell /data/local/tmp/rsync --version
rsync version 3.1.1 protocol version 31
Copyright (C) 1996-2014 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
no socketpairs, hardlinks, symlinks, no IPv6, batchfiles, inplace,
append, no ACLs, no xattrs, no iconv, no symtimes, no prealloc
rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.

Run rsync --version . If you get something like

rsync: command not found

, then rsync isn't installed to the computer. On Ubuntu you can install it with sudo apt-get install rsync

Start the rsync daemon in the device by running: adb shell /data/local/tmp/rsync --daemon --no-detach --config=/sdcard/rsyncd.conf --log-file=/proc/self/fd/2 . It must start up with just a single message:

Keep it running for the duration of the copies (below), and continue working in another terminal window. Or press Ctrl-C to exit right now, and restart it (so it will start running in the background on the device) like this: adb shell '/data/local/tmp/rsync --daemon --config=/sdcard/rsyncd.conf --log-file=/data/local/tmp/foo &'

Start port forwarding by running: adb forward tcp:6010 tcp:1873

Now you can start copying files with rsync (back and forth). An example command:
rsync -av --progress --stats rsync://localhost:6010/root/sdcard/Ringtones .

You may find the --size-only flag useful if rsync is copying the same files over and over again.

You may want to copy from or to /storage/sdcard1 instead of /sdcard on the device.

Some newer storage devices have the exfat filesystem (older ones typically have fat or some emulation of it, and that's just fine). Writing to exfat drives rsync -av crazy: it reports steady progress with lots of Operation not permitted errors, but it actually doesn't create any files. This applies both to rsync running on the Linux computer and to rsync running on the device. A solution is to replace rsync -av with rsync -vrtlD , and restart the copy.

2015-03-03

This blog post explains how to avoid data copies in assignment from temporary values in C++. The move assignment operator (a feature introduced in C++11) will be defined for the class, and it will get called instead of the copy assignment operator, and the copy will be avoided.

Let's consider std::string, a type whose values are expensive to copy (assuming an implementation which copies the entire string data, not just a pointer to the buffer). Both the copy constructor and the copy assignment operator (operator=) copy the entire data from the source object, like this for the copy assignment operator:
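Real std::string implementations differ; a simplified sketch of such a copying assignment (a hypothetical String class written for illustration, not the real std::string) could be:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical simplified string; only the members needed to show the
// copy assignment operator are included (copy constructor etc. omitted).
class String {
 public:
  explicit String(const char* s) : size_(std::strlen(s)) {
    data_ = new char[size_ + 1];
    std::memcpy(data_, s, size_ + 1);
  }
  String& operator=(const String& other) {
    if (this != &other) {
      delete[] data_;
      size_ = other.size_;
      data_ = new char[size_ + 1];
      std::memcpy(data_, other.data_, size_ + 1);  // Copies all the bytes.
    }
    return *this;
  }
  ~String() { delete[] data_; }
  const char* c_str() const { return data_; }
 private:
  char* data_;
  std::size_t size_;
};
```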

Let's assume that we have a function which returns a string: std::string GetUserName();. We can call this function and save the result to a variable: std::string user_name = GetUserName();. (It also works the same way with const in the beginning.) How many times does the value have to be copied until it lands in the variable user_name? Most modern compilers do the return value optimization to avoid all copies (so no copy constructor and no copy assignment operator is run). But if we already have the variable std::string user_name; and we want the assignment user_name = GetUserName(); to avoid copies, then we need to define a move assignment operator (taking an rvalue reference (&&) argument instead of a const reference (const&) argument), and the assignment above will use the move assignment operator, which is faster than the copy assignment operator, because it can steal the resources from the source. An example implementation:

// Sketch of a move assignment operator for a simplified string class
// (illustration only; operators can't be added to std::string itself):
class String {
 public:
  String& operator=(String&& other) {
    capacity_ = other.capacity_;
    size_ = other.size_;
    data_ = other.data_;  // Copies just the pointer.
    other.capacity_ = other.size_ = 0;
    other.data_ = nullptr;
    return *this;
  }
 private:
  char* data_;
  size_t size_, capacity_;
};

There is also a corresponding move constructor which can be called instead of the copy constructor to avoid the copy. It works even if the return value optimization cannot be applied (e.g. when the function body has both return a; and return b;).

The only difference is that =C has changed to #C when C++11 features were enabled. That's because the body of the #if in the code gets compiled only for C++11 and above, and this body contains the move assignment operator. If there is a move assignment operator (as in our C++11 version), then the line cb = C10(33); will use it; otherwise (as in our C++98 version) that line will use the copy assignment operator.

Where do the actual data copies occur? In the copy assignment operator (#C) and in the copy constructor (*C, not called at all in the example). By defining a move assignment operator in C++11, we can prevent the copy assignment operator from getting called, thus we can avoid a copy when assigning from a temporary (rvalue).

Please note that the return value optimization avoids the copy in the C ca = C10(11); statement. This works even in C++98, without the move constructor.

In C++98, a copy can be avoided by using swap at the call site, for example if the caller replaces user_name = GetUserName(); with { std::string tmp = GetUserName(); user_name.swap(tmp); }, then the copy will be avoided: the definition of tmp takes advantage of the return value optimization, and swap swaps only the pointers and the sizes, not the actual data.

2015-02-12

This blog post shows a Java class to generate all subsets of size k of the set {0, 1, ..., n - 1}, in lexicographic order. The code uses O(k) memory, and it doesn't store multiple subsets at the same time in memory. This is achieved by implementing the Iterable interface.

import java.util.Iterator;
import java.util.NoSuchElementException;

/**
 * Generate all subsets of {0, ..., n-1} of size k, in lexicographically
 * increasing order.
 */
public class Fixsub implements Iterator<int[]>, Iterable<int[]> {
  private int n;
  private int k;
  private int[] a;

  public Fixsub(int n, int k) {
    assert(k >= 0);
    assert(n >= k);
    this.n = n;
    this.k = k;
  }

  private int findIdxToIncrease() {
    int i;
    for (i = k - 1; i >= 0 && a[i] == n - k + i; --i) {}
    return i;
  }

  @Override
  public Iterator<int[]> iterator() {
    return this;
  }

  /**
   * Always returns the same array reference, the caller is responsible for
   * making a copy. The caller shouldn't modify the array elements.
   */
  @Override
  public int[] next() {
    if (a == null) {
      a = new int[k];
      for (int i = 0; i < k; ++i) {
        a[i] = i;
      }
    } else {
      int i = findIdxToIncrease();
      if (i < 0) throw new NoSuchElementException();
      for (++a[i++]; i < k; ++i) {
        a[i] = a[i - 1] + 1;
      }
    }
    return a;
  }

  @Override
  public boolean hasNext() {
    return a == null || findIdxToIncrease() >= 0;
  }

  @Override
  public void remove() {
    throw new UnsupportedOperationException();
  }

  public static void main(String[] args) {
    for (int[] p : new Fixsub(7, 4)) {
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < p.length; ++i) {
        if (i != 0) sb.append(',');
        sb.append(p[i]);
      }
      sb.append('\n');
      System.out.print(sb);
    }
  }
}

2014-10-26

This blog post explains how to download some Gmail messages (distinguished by a label) to a computer running Unix. The download method shown works in headless mode, so it can be run from cron etc.

Preparation instructions

You will need a computer running Unix which can run Fetchmail. Your
mail will be downloaded to a file (in mbox format) on that computer.

If Fetchmail is not installed on that computer, you need to install it (which needs root privileges) or get the admin to install it for you.

You will have to add your Gmail e-mail address and password to a config
file which will be stored on the Unix computer running fetchmail. If
this is not secure enough for you, then please don't continue.

Gmail setup instructions

Create a Gmail account if you don't already have one.

Log in to Gmail in a web browser (can be on any computer).

In Settings (the gear button) / Forwarding and POP/IMAP, select Enable
IMAP, and click on the Save button.

Create two labels, one of them (let's call it foo auto) will be
used by Gmail to track which messages have been downloaded already.
After a download, Gmail will automatically remove that label. The other
label (let's call it foo manual) is for your reference only.

If you already have some e-mail in your inbox to be downloaded, apply
both labels to them, manually (or by searching).

If needed, set up a filter (in Settings / Filters / Create new filter)
which will apply both labels to incoming messages you want to be
downloaded.

Unix computer setup instructions

Log in to the Unix computer you want to download the messages to.
Typically such login is done using SSH.

Install fetchmail and make sure it is on the $PATH (usually
/usr/bin/fetchmail from the package). Version 6.3.9 works, but probably
older versions work too. On Debian and Ubuntu, it is as easy as running
(without the $):

$ sudo apt-get install fetchmail

Create a directory which will hold your downloaded mail. For example:

$ mkdir downloaded.mail
$ cd downloaded.mail

Create a plain text config file with the contents below, and copy it
to the Unix computer, to the downloaded.mail directory. Typically the copying is done using scp or rsync. Don't forget to change the USERNAME, PASSWORD and foo auto settings to reflect your Gmail account. (If it's inconvenient for you to edit files on the Unix computer, change it locally first, and then copy it again.)
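A minimal sketch of such a config file, using standard fetchmail options (the folder name must match your foo auto label, and the mda path matches the mbox file created below; adjust every value to your account):

```
# download.fetchmailrc (sketch -- edit USERNAME, PASSWORD and the label)
poll imap.gmail.com
  protocol imap
  user "USERNAME@gmail.com"
  password "PASSWORD"
  folder "foo auto"
  ssl
  mda "cat >>download.mbox"
```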

Make sure the config file on the Unix computer has the filename download.fetchmailrc. Here is
an example how to rename it:

$ mv download.fetchmailrc.txt download.fetchmailrc

Revoke other users' access to the config file to protect your
password from being stolen. Please note the star at the end:

$ chmod 700 download.fetchmailrc*

Create the output mbox file and protect it:

$ : >>download.mbox
$ chmod 700 download.mbox

In the downloaded.mail directory, run:

$ fetchmail -f download.fetchmailrc

This will download and append all your Gmail messages with the label
foo auto to the file download.mbox in the downloaded.mail directory
on the Unix computer, and remove the label foo auto, so when you run
the command again, messages already downloaded won't be downloaded
again. (Gmail labels are global, so you have to define additional
labels if you want to download mail to several computers.)

If needed, set up a cron job which will download automatically for you.
Typically you can download once per minute, once per hour or once per
day using cron jobs.

Incremental download instructions

Log in to the Unix computer you want to download the messages to.
Typically such login is done using SSH.

Change to the downloaded.mail directory:

$ cd downloaded.mail

In the downloaded.mail directory, run:

$ fetchmail -f download.fetchmailrc

Messages already downloaded won't be downloaded again, because they
don't have the foo auto label anymore. (Gmail labels are global, so
you have to define additional labels if you want to download mail to
several computers.)

2014-10-21

This blog post presents some of the speed measurements I've done with alternative implementations of memset.

I was filling a 1 GB memory area 20 times (equivalent to memset(a, 0, 1 << 30) each time) with various implementations, and measuring the speed. I compiled the program for i386 (gcc -m32) and amd64 (gcc -m64), and ran it on a desktop PC running 64-bit Linux 3.13.0 on a Xeon CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) with 32 GB of RAM. I compiled the code with GCC 4.6.3 at optimization level gcc -O2, because gcc -O3 optimized the 1-byte-at-a-time loop into a 4-bytes-at-a-time loop.
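A minimal sketch of two of the compared variants (the function names and the benchmark shape here are my own, not from the original benchmark code):

```c
#include <stddef.h>

/* 1-byte-at-a-time fill: the naive loop that -O3 (but not -O2) would
 * rewrite into a wider loop. */
static void fill_bytewise(unsigned char *p, unsigned char c, size_t n) {
  size_t i;
  for (i = 0; i < n; ++i) p[i] = c;
}

/* 4-bytes-at-a-time fill; assumes n is a multiple of 4 and p is
 * suitably aligned (e.g. from malloc). */
static void fill_wordwise(unsigned char *p, unsigned char c, size_t n) {
  unsigned v = c * 0x01010101u;  /* Replicate the byte to all 4 lanes. */
  unsigned *q = (unsigned *)p;
  size_t i;
  for (i = 0; i < n / 4; ++i) q[i] = v;
}
```

Timing such functions over a large buffer (with the compiler prevented from replacing the loop with a memset call) reproduces the comparison.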

It turned out that there was no measurable difference between the i386 and amd64 versions of the program. Here are the relative speeds (higher numbers are proportionally faster) of user times:

I interpret the numbers (except for memset) the following way: the cache doesn't help at this size; the CPU does correct branch prediction (or taking the branch is fast enough), or taking a branch is faster than writing to memory; the data bus between the CPU and the memory can take 4 bytes at a time.

But why is the assembly instruction rep stosd that much faster than any of the loops? What's the magic behind it? It looks like my CPU has an optimized rep stosd and rep stosb built in, called ERMSB (Enhanced REP MOVSB/STOSB). More details in the PDF available from here (search for memset within the downloaded PDF).

2014-09-08

This blog post explains how hyperbolic discounting causes you to change your mind, and how to trick it so that you can keep to your commitments without using up too much willpower.

A student who doesn't learn as much as he wants to

A student has an exam on Friday. He has 4 days (Monday, Tuesday, Wednesday
and Thursday) to learn. The more he learns, the more points he will get for
the exam. On each learning day, he may choose to skip learning and go to a
party instead. It happens that the previous week he decides that he will
learn for 3 days, but during the week it turns out that he learns only once,
and goes to parties on 3 days. How can this happen?

Explanation with hyperbolic discounting

This can be explained by assuming that the mind does
hyperbolic discounting,
i.e. it computes today's utility of a future event by dividing the
utility of the event by one plus the number of days of delay. See the
more specific example below:

Hyperbolic discounting over time, for an event with utility u today:

same thing tomorrow: u/2

same thing in 2 days: u/3

same thing in 3 days: u/4

same thing in 4 days: u/5

same thing in 5 days: u/6

Utility function for today:

take the exam, get x points (where x is the number of days previously spent learning for the exam, x in 0..4): 60*x

party: 29

have a rest: 0

learn: 0

Previous Sunday:

learn on Monday–Thursday: 60*4/6 = 40

party on Monday–Thursday: 29/2+29/3+29/4+29/5 = 37.216...

party on Monday–Tuesday, learn on Wednesday and Thursday: 29/2+29/3+60*2/6 = 44.166...

learn on Monday–Tuesday, party on Wednesday and Thursday: 60*2/6+29/4+29/5 = 33.05

party on Monday, learn on Tuesday–Thursday: 29/2+60*3/6 = 44.5

party on Monday–Wednesday, learn on Thursday: 29/2+29/3+29/4+60/6 = 41.416...

... (there are a few other cases)

what will happen: The student decides to party on Monday, and to learn on Tuesday–Thursday.

Monday:

learn on Monday–Thursday: 60*4/5 = 48

party on Monday, learn on Tuesday–Thursday: 29+60*3/5 = 65

party on Monday–Tuesday, learn on Wednesday–Thursday: 29+29/2+60*2/5 = 67.5

party on Monday–Wednesday, learn on Thursday: 29+29/2+29/3+60/5 = 65.166...

party on Monday–Thursday: 29+29/2+29/3+29/4 = 60.416...

... (there are a few other cases)

what will happen: The student parties on Monday, and decides to party on Tuesday, and to learn on Wednesday–Thursday.

Tuesday:

learn on Tuesday–Thursday: 60*3/4 = 45

party on Tuesday, learn on Wednesday–Thursday: 29+60*2/4 = 59

party on Tuesday–Wednesday, learn on Thursday: 29+29/2+60/4 = 58.5

party on Tuesday–Thursday: 29+29/2+29/3 = 53.166...

... (there are a few other cases)

what will happen: The student parties on Tuesday, and decides to learn on Wednesday–Thursday.

Wednesday:

learn on Wednesday–Thursday: 60*2/3 = 40

party on Wednesday, learn on Thursday: 29+60/3 = 49

party Wednesday–Thursday: 29+29/2 = 43.5

... (there is one more case)

what will happen: The student parties on Wednesday, and decides to learn on Thursday.

Thursday:

learn on Thursday: 60/2 = 30

party on Thursday: 29

what will happen: The student learns on Thursday.

Friday:

take the exam on Friday: 60

don't take the exam on Friday: 0

party on Friday: 29

what will happen: The student takes the exam, scoring 1 point.

So hyperbolic discounting causes the student to change his mind several
times during the week: on Tuesday and Wednesday he will do something
different from what he decided on Sunday. On Sunday he was planning to learn
later in the week, and to get 3 points on the exam. But he will end up
learning on only one day, and getting only 1 point on the exam.

It can be proven that with exponential discounting you would never change
your mind: the relative utility of the outcomes remains the same as time
passes, so no such inconsistencies arise.

Unfortunately, some experiments suggest that hyperbolic discounting is
hardwired into the mind: you have no conscious power over it, and you can't
replace it with exponential discounting at will.

Exercising willpower is easier said than done, especially since each
person has a limited amount of willpower, and once it gets depleted, one has
to wait for it to be refilled. So depending on the challenges you have
already faced that day, you may have no willpower left to stay away from the
party.

Another option you have is limiting your choices and committing yourself to
the same choice for a prolonged period of time. For example, if on Sunday
the student commits himself to doing the same activity on all 4 days, Monday
through Thursday, then learning on all 4 days (60*4/6 = 40) beats partying
on all 4 days (29/2+29/3+29/4+29/5 = 37.216...), so he will choose learning,
and the commitment ensures that he will actually do it.
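The day-by-day re-planning above can be reproduced with a short simulation. This is a sketch: the utility values and day numbering follow the example, while all the function and variable names are mine.

```python
from itertools import product

PARTY_U, POINT_U = 29, 60  # utility of partying; utility per exam point
EXAM_DAY = 5               # Friday, counting Sunday as day 0

def discounted(u, delay):
    """Hyperbolic discounting: divide by one plus the delay in days."""
    return u / (delay + 1)

def plan_value(plan, today, learned_before):
    """Discounted utility, seen from `today` (1=Monday..4=Thursday), of a
    plan over the remaining action days: True = learn, False = party."""
    total = sum(discounted(0 if learn else PARTY_U, i)
                for i, learn in enumerate(plan))
    points = learned_before + sum(plan)
    return total + discounted(POINT_U * points, EXAM_DAY - today)

# Each day the student re-optimizes over all remaining plans, but only
# today's action actually happens.
learned, actions = 0, []
for today in range(1, 5):
    plans = product([False, True], repeat=5 - today)
    best = max(plans, key=lambda p: plan_value(p, today, learned))
    actions.append("learn" if best[0] else "party")
    learned += best[0]

print(actions, "score:", learned)
# → ['party', 'party', 'party', 'learn'] score: 1
```

With exponential discounting instead (u * delta**delay for some fixed delta), the ranking of plans would not change from day to day, which is the time consistency claimed above.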

2014-08-16

This blog post contains some comments about finding all triangles whose side lengths are integers and whose area/perimeter ratio is a given integer r. This is related to Project Euler problem 283.

The blog post contains a nice analysis and some pseudocode for generating all possible (a, b, c) side triplets for the given ratio r. Unfortunately, both the analysis and the pseudocode contain some mistakes.

The analysis incorrectly states that v must be smaller than floor(sqrt(3)*u). The correct statement is that v must be smaller than sqrt(3)*u. Before the correction, some solutions, such as (6, 8, 10) for r=1, were not generated.

The condition d1 <= lhs / d1 had a >= instead.

Please note that the algorithm yields each triangle once, in an unspecified order of a, b and c within the triangle.
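As a sanity check, such triangles can also be found by brute force (a sketch, not the article's (u, v) parametrization): with p = a+b+c, Heron's formula turns the condition area = r*perimeter into the exact integer identity (p-2a)(p-2b)(p-2c) = 16*r^2*p.

```python
def triangles_with_ratio(r, max_side):
    """Brute-force search for integer-sided triangles a <= b <= c with
    area/perimeter == r, with all sides at most max_side.
    Avoids floating point by checking the squared Heron condition:
    (p-2a)(p-2b)(p-2c) == 16*r*r*p, where p = a+b+c."""
    found = []
    for a in range(1, max_side + 1):
        for b in range(a, max_side + 1):
            for c in range(b, min(a + b, max_side + 1)):  # triangle inequality
                p = a + b + c
                if (p - 2*a) * (p - 2*b) * (p - 2*c) == 16 * r * r * p:
                    found.append((a, b, c))
    return found

print(triangles_with_ratio(1, 30))
# → [(5, 12, 13), (6, 8, 10), (6, 25, 29), (7, 15, 20), (9, 10, 17)]
```

For r=1 this reproduces (6, 8, 10) along with the four other integer triangles whose area equals their perimeter.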

2014-08-12

This blog post summarizes what I learned today about how C++ extends the lifetime of temporaries that are bound to references.

Normally a temporary is destroyed at the end of the full expression; for example, a temporary string returned by GetName() lives only until the end of the line defining name. (There is also the return value optimization, which may eliminate the temporary altogether.)

However, it's possible to extend the life of the temporary by binding it to a reference-typed local variable (a const reference, since a non-const lvalue reference can't bind to a temporary). In that case, the temporary will live until the corresponding reference variable goes out of scope. For example, this will also work and it's equivalent to the above:

2014-07-29

This blog post explains how to overwrite the git committer name, git committer e-mail and git committer date in previous commits in a git repository. This is useful e.g. after a series of git commit -C ... calls, which copy the author name, e-mail and date from the specified commit, but use the committer name, e-mail and date from the environment of the command.

Please note that rewriting git history is potentially dangerous because it can lead to data loss and synchronization issues with others who pull from the repository. Read more about it in Git Tools – Rewriting History.

First make sure you don't have any uncommitted changes: git status -s shouldn't print anything.

Then check out the relevant branch, and run this command on a Unix system (or within Git Bash on Windows), without the leading $:
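The command itself is missing from this copy of the post; a sketch of a command with this effect (not necessarily the author's exact invocation) copies the author fields over the committer fields on every commit of every branch using git filter-branch:

```shell
# Rewrites history: the committer name, e-mail and date of each commit
# are replaced with the corresponding author fields. Destructive — work
# on a fresh clone, and force-push only after verifying the result.
git filter-branch --env-filter '
  export GIT_COMMITTER_NAME="$GIT_AUTHOR_NAME"
  export GIT_COMMITTER_EMAIL="$GIT_AUTHOR_EMAIL"
  export GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE"
' -- --all
```

Afterwards, git log --format='%an %ae %ad | %cn %ce %cd' should show identical author and committer fields for each commit.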