Perspectives on design, technology, medicine, and intersections of these

Background

I am a competitive swimmer with the Cambridge University Team and a rower with Hughes Hall College. In high school, I was quite serious about swimming, with my top finish being 7th at Canadian Nationals for 17&18 year olds. I retired from the sport for seven years before coming back when I came to Cambridge. Part of this process was losing 35lbs (16kg), partly from training but a lot from nutrition.

A lot of sports performance is individual – people respond differently to different techniques. It is important to experiment, collect data, and think critically to find what works for you. Most of the suggestions here are anecdotal – your mileage may vary. This applies in training as well – the best athletes try to make every single stroke perfect, and are self-aware and proactive about their technique, fitness, and weaknesses. This article is aimed at regional to national level student athletes, but others may find it useful.

Things that happen outside the water/gym are very important to final performance. I would say success is 40% training, 40% looking after yourself, and 20% mental.

Measuring

Buy a digital scale and weigh yourself every day at a comparable state. Your weight can fluctuate several kg throughout the day, so aim to measure repeatably. I usually measure just before breakfast. Think critically about the result – particularly if you’ve changed something. It can serve as a good early warning that you are training/eating too little/much or getting ill. Try to anecdotally see whether weight correlates with training times and energy levels. Your average weight shouldn’t change by more than about 1kg/wk. But don’t be too hung up on weight – short term changes can mean a lot of things and weight itself is a poor analog for fitness.

Supplements

I take 3-5 supplements a day including a multi-vitamin, Vitamin C, fish oil, and often a protein shake or bar. I think daily multi-V’s are a good idea for everyone – other choices should be tailored to what you think you need. Example: I have asthma and don’t eat fish, so fish oil is a good source of Omega-3s.

Water/ Drinks

Chocolate milk is becoming popular as a post-training recovery – it has a good blend of protein and carbs. Sports drinks can be useful in long sessions to keep energy up without becoming bloated.

I find often when I think I’m hungry, I’m actually thirsty. If you’re looking to lose body fat, drinking more water is one of the first things to do. Weight loss is difficult when training – eating too little impacts performance. If you’re looking to lose weight, do it slowly and make sure you eat before training to keep fueled.

Meal Timings

I try to eat 5-6 smaller meals a day. Try to eat a small meal ~2 hours before training and within an hour of finishing. The pre-workout meal should be light on the stomach, and more focused on carbohydrates. Experiment on timing – the balance is between feeling bloated in a session and running out of energy. This can be hard in morning sessions – consider bringing a sports drink instead of water if you can’t eat before. On race days, I often have a large dinner, very small breakfast, and fuel morning races on sports drinks.

The post-workout meal should have protein and carbohydrate. Chocolate milk and protein/ recovery bars are good choices.

Try to cut down on snacking between meals – being hungry sometimes is ok (but not during a training session).

What Should I be Eating?

I would say most amateur athletes eat too many carbohydrates and not enough protein. Probably 3-4 of my meals have a considerable amount of protein in them. Protein shakes can be an easy and economical way to get more protein. There seems to be evidence that the body absorbs protein better in smaller doses through the day than one big meal. Lean meats (chicken, turkey, fish, eggs) are better choices. Protein will leave you more full and help muscle recovery.

Carbohydrates you do eat should be denser and complex. Simple carbohydrates are basically sugars – sweets, cakes, white bread. They don’t give a lot of long-term fuel. Aim for whole grain pasta, brown rice, cous cous, and beans. I don’t eat very much bread, and when I do it’s usually with protein (eggs on toast, ham baguette).

Fruit and veg is good for the vitamins and anti-oxidants and to fill you if you’re hungry, but they don’t really fuel performance or recovery. The exception would be beans which have a fair bit of protein. I eat a bean salad or baked beans almost every day. One of my favorite snacks is a tin of chickpeas or bean salad with a bit of olive oil, balsamic vinegar, and thyme and dill on top. Super easy and quick, lots of complex carbs, protein, and fibre. And about the right size for a pre-training meal that won’t overfill.

In general, eating simpler is better. I usually season with spices and herbs rather than sauces. Cooking at home makes it easier to control what you’re getting compared to eating in hall or getting takeaway.

Consider alternatives. I like crisps/chips, but a lot of that is a craving for the salt or crunchiness. Ryvita or rice cakes are a good alternative. High cocoa dark chocolate is actually quite good for you in moderation. Fruit and berries if you’re craving sweet.

Read labels on food to develop a feel for what different things mean and guide choices. Is 5g of protein a lot? Is 50g? Is 400 calories a snack or a meal? Fat and salt are ok but try to reduce saturated fats. Is a chicken caesar or fried chicken wrap a better choice for lunch? (answers: a chicken breast is about 20g of protein – at 50g your body might not be able to absorb all of it. 400 calories would be a small meal if eating 5-6 a day.)

Sleep and Time Management

We’re all here first as students. Cambridge is a lot of work, and part of being an athlete here is staying on top of your work so you don’t miss training and get enough sleep. There is a big difference in recovery, energy, and focus from getting enough sleep. Having time also lets you prepare for training sessions and have good foods in the fridge.

Balancing

Moderation is important. Avoid absolute rules of “never eat X” – you will eventually break them and give up completely. Stay social and sane – being a student athlete can be a grind when it feels like all you are doing is eating, sleeping, training, and working. Make time for fun!

Alcohol

Generally not good for performance – alcohol dehydrates and there are quite a lot of empty calories. Big nights out often include missed sleep and poor eating choices. Try to moderate, especially when approaching a competition. Rehydrate when you get home and try taking an anti-oxidant like vitamin C.

Wine (red) and spirits (without mix or with something like water or tonic) are better choices.

Stretching and Massage

Aim to stretch ~10 min before and after each training session. A foam roller is a good idea for knots in the back and legs that develop in heavy training. A tennis ball works too. A deep massage/ rolling session can take a few days to recover from – don’t do it too close to a competition.

For some time, I have been into entrepreneurship. More recently, I have become a better programmer while doing my PhD at Cambridge. This has led to http://www.rasterfarian.co being launched recently.

Rasterfarian is a photo enlargement web-based application.

Rasterfarian’s logo

It started from two directions. One was trying to apply image processing techniques I’ve learned in my PhD to something more widely applicable. The second was the need to expand small photos without losing quality.

My department has a small logo, which as far as I know only exists on its website. I had wanted to put it on a poster presentation, but blowing it up ~20X made it look terrible – really pixelated. I wondered if there was a way to enlarge it without losing so much quality.

Department logo before expansion

Department logo after enlargement on Rasterfarian

It turns out, there is. The general method I use is similar to bicubic interpolation (http://en.wikipedia.org/wiki/Image_scaling#Algorithms) – but with a few refinements and improvements. I use Python’s Scikit-Image Library (http://scikit-image.org/) to power much of the algorithm, and host on Red Hat Openshift (www.openshift.com). One day I’ll write a post about the implementation of everything.
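As a rough sketch of the baseline technique (not Rasterfarian’s actual refinements, which I haven’t written up yet), scikit-image can do a bicubic enlargement in a few lines. The `enlarge` helper and the 20x factor are my own illustration:

```python
import numpy as np
from skimage.transform import resize

def enlarge(image, factor):
    # Bicubic interpolation corresponds to spline order 3 in resize().
    new_shape = (image.shape[0] * factor, image.shape[1] * factor)
    return resize(image, new_shape, order=3)

# A tiny 4x4 grayscale gradient blown up 20x, as with the department logo.
small = np.linspace(0.0, 1.0, 16).reshape(4, 4)
big = enlarge(small, 20)
print(big.shape)  # (80, 80)
```

Plain nearest-neighbour scaling copies each pixel into a 20x20 block, which is exactly the pixelated look above; the cubic spline instead fits smooth curves through the pixel values.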

Traffic has been interesting to the site. The first wave was from friends on Facebook and Twitter. This traffic came quite quickly, and crashed my server a few times so I upgraded. I made some improvements to file handling, scaling, and the UI from feedback from this round.

The second stage of traffic was from Hacker News. This again was a quite strong pulse of traffic. My servers stayed up for this, and I made further improvements to the UI from feedback from HN users. In particular, I improved the site’s mobile appearance and started using Bootstrap. For much of my early development, I didn’t test as much as I should have, especially with different devices and browsers.

More recently, I’ve been getting a lot of my traffic from Google Adwords. This has been quite interesting (and thank you Google for ad credits for start-ups – I wouldn’t have stuck with you this long without them!). Many of my users are from India, the Philippines, and Romania. It’s exciting that people all over the world are using my app, but I hadn’t expected this. My page is in English, and I don’t know if it’s translated or easily usable for speakers of other languages. I had put ads on my site to see if they would be profitable – but they are targeted at a USA and UK audience.

Google Adwords seems to direct traffic to my site well, and when linked to analytics it optimises the traffic toward users that convert. The optimisation takes a few days, and changing parameters in Adwords seems to break the optimisation each time. I’ve learned to avoid reacting or experimenting too quickly – it takes a while to get proper results. I’ve used Bing Ads as well and don’t seem to get results as good, but this could be because I’m tracking with Google Analytics :)

The ad on my site is through an affiliate marketing program. I have a decent click-through rate, but no conversions so far. I am waiting to collect enough data, but I’m not sure this model works for my site. I had hoped to create a small amount of income to gain some greater freedom and independence, not a major venture-funded business (on this idea!). So far, this model doesn’t seem to be working. Further, if many of my visits are from Google Adwords, it’s as if I’m paying to bring people in, then hoping they will leave again through the ad. Should I target visitors who will click my ad, or visitors who will use my site? These may be conflicting goals.

When making the site, I’d worried about things that were lower priority or in the distance. How do I scale to hundreds of users on the site at a time (I typically have only hundreds per day, several weeks after launch)? How do I remove all the data my users will be generating? Instead, I should have tested more immediate things like UI/ UX on mobile devices and IE.

There continues to be a long list of updates I’m hoping to add to Rasterfarian. I also have several other ideas for small apps I plan to launch, all in the computer-aided design space. Rasterfarian and these other apps will be under the Makeraas (Maker As A Service) brand. Makeraas is a company I’ve started to democratise computer-aided design (CAD) software. CAD for engineers and artists has become quite specialised and difficult to use – but many final applications like 3D printing and digital photo sharing have never been easier. My goal is to lower the learning curve of CAD to make these new technologies more accessible. Rasterfarian, for example, enlarges photos in a way that is relatively easy in Photoshop or GIMP, but most people don’t have the time to learn that software.

I recently submitted the following paper for a research skills course as part of my program at Cambridge. I have decided to post it here since people may be interested in learning more about Alan Turing’s life and work given his recent royal pardon. The focus is on his “Computing Machinery and Intelligence” paper, but his other work and life are also included. I welcome any discussion in the comment section below.

1.1 Background on the Author

Alan M Turing was a remarkable man, whose breadth and quality of contributions make him deserving of mention in the same breath as such greats as Newton, Darwin, and Maxwell. In addition to Turing the mathematician, Turing the person is nearly as fascinating, more closely resembling the tortured souls seen in great artists than the temperament typically seen in great scientists.

Turing was born in London, on 23 June 1912, to a middle-class family [Turing, 2012]. He was seen as bright at an early age, inventive, and skilled in both words and mathematics. In 1930, he won a scholarship to study the Mathematics Tripos at King’s College, Cambridge. He attained only a second in his Part I, but went on to be elected a Fellow of the college upon graduating, at age 22. A close friend, Christopher Morcom, died just before Turing was to start at Cambridge. This affected him throughout his life – at times seeming to provide him with motivation, and at others, loneliness.

The first contribution to bring attention to Turing was his solution to Hilbert and Ackermann’s “Entscheidungsproblem”, which they had proposed in 1928 [Beeson, 2004]. The problem asks if an algorithm exists that takes a set of axioms and a conjecture and, in a finite amount of time, either produces a proof of the conjecture from the axioms or states that no proof exists. Turing solved the problem by considering his Universal Machine, proving shortly after Alonzo Church that some problems are unsolvable in a finite amount of time. The Universal (Turing) Machine was one of Turing’s great contributions. It has a tape and a read/write scanner. The tape has many ‘bits’ on which information is either read or written by the head. The head also knows what state it is in. From this alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine, and it formed the inspiration for all subsequent computer architectures.
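The Universal Machine is simple enough to sketch in a few lines of Python. The simulator and the bit-flipping rule table below are my own illustrative example, not Turing’s notation:

```python
def run_turing_machine(rules, tape, state="start", steps=1000):
    """Minimal Universal-Machine-style simulator: a tape, a head, and a state.
    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right) and "_" is the blank symbol."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A hypothetical rule table that inverts a binary string, then halts.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", -1, "halt"),
}
print(run_turing_machine(flip, "1011"))  # 0100
```

The universality result is that one fixed rule table can read another machine’s rules off the tape and simulate it, so a single machine can, in principle, perform any computation.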

Turing went to Princeton to work with Church, earning a PhD after two years of effort [Turing, 2012]. Following that, he returned to King’s College until the outbreak of World War Two. Through the war, he was based largely at Bletchley Park, working as one of the lead cryptographers, and played a major role in defeating the Enigma code. This role was instrumental to the Allied war effort in protecting the Allied merchant fleet from German U-Boats in the North Atlantic. He was awarded an Order of the British Empire for his efforts.

Following the war, Turing joined the National Physical Laboratory to design some of the first computers. Through this time, he wrote internal reports and discussed machine intelligence with colleagues. But it wasn’t until he joined the faculty at the University of Manchester that he published “Computing Machinery and Intelligence”, having left the NPL in frustration with the slow progress of building computers there. This paper is one of the seminal papers in the field of artificial intelligence, and continues to be controversial in the fields of psychology and philosophy due to the implications of having intelligent machines.

Having already made major contributions in computer science, cryptography, and artificial intelligence, Turing again wrote a seminal paper, “The Chemical Basis of Morphogenesis”, in 1952, which probably makes Turing the father of computational biology as well [Turing, 2012].

Turing died from cyanide poisoning on 7 June 1954, in an apparent suicide. A few years before, Turing had admitted to being homosexual, a crime in Britain at the time [Leavitt, 2007]. He was sentenced to estrogen injections, lost his government security clearance, and was no doubt publicly embarrassed. To have accomplished what he did while living first in secrecy and then under persecution adds to the legend of Alan Turing. In December 2013, Turing was finally issued a pardon by the British Government.

1.2 Summary of the Paper

“Computing Machinery and Intelligence” proposes a way to test if machines can think. In doing so, Turing provided a jolt to the field of Artificial Intelligence, and the paper provided motivation for much of the early work in the field. While AI eventually began to focus on individual applications, the paper remains relevant in the fields of psychology and philosophy, where Turing’s ideas help study the human mind in a more systematic way [Millican and Clark, 1999].

Turing proposes an imitation game, where an examiner communicates with a hidden human and a hidden computer through teletype. The examiner must determine which of the examinees is which. If the computer is able to fool the examiner, it is said to have passed the (Turing) test; the name “Turing Test” was given to the imitation game only after Turing proposed it in the paper. The test is based on a game where the examinees are a man and a woman, a setup that has drawn interesting commentary from Leavitt and others in the context of Turing’s homosexuality and awkwardness with women [Leavitt, 2007].

While the (Turing) test is probably the most famous outcome of the paper, only a minority of the manuscript is dedicated to it. Instead, Turing extends beyond the test to propose potential qualities of a machine that might eventually pass his test, as well as to address nine potential criticisms of the test. Turing’s description of the type of digital computer that may pass his test is remarkable in its similarities to how computer design has advanced. The basic architecture of storage, executive unit, and controller remains unchanged. He correctly predicted massive gains in storage and processing power, and anticipated that software would be the major limiting factor in AI. Anticipating the difficulty of manually coding a variety of behaviors, Turing proposes machine learning as a way to program an AI machine.

Of Turing’s nine possible objections, three can probably be discarded today as lacking scientific merit: the objections from theology, consequences-are-too-dreadful, and extra-sensory perception. The remaining objections and Turing’s responses will be summarized.

The mathematical objection derives from the theory of Turing, Church, and others proving that discrete-state machines have inherent limits on their capabilities. Turing’s response is that while machines may have inherent limitations, human intellect probably does as well. He feels the imitation game remains a good test despite the mathematical objection.

Lack of consciousness is the second continuing objection, which has since formed the distinction between strong AI (has consciousness) and weak AI (does not). Turing recognizes that AI could pass his imitation test without having consciousness, and somewhat dismisses consciousness as relevant to the ability to think. In part, his argument relies on the fact that consciousness is not well understood in humans, so it should not be used in a test of other entities. This objection was later raised in Searle’s famous “Chinese Room” paper, which will be described in the next section.

A third objection to Turing’s test lumps together objections that a machine may never have X, where X is a personified quality such as kindness, a sense of humor, the ability to love, the ability to learn, or to enjoy strawberries and cream. The response is twofold: Turing objects to the arbitrariness of these qualities as a test of intelligence, and he believes most of these things are possible for a machine to do.

A particular quality of X that Turing separates out is inventiveness, also known as Lady Lovelace’s objection. In the context of Babbage’s early computers, Lady Lovelace commented that computers are limited in that they cannot do anything new; they can only do what their programmers have previously instructed them to do. Turing objects, believing machine learning will allow machines to originate new knowledge. Today, this objection holds little ground given the importance of computers in so many fields, including algorithms that are able to make mathematical proofs that humans cannot.

The continuous nature of the nervous system is raised as a possible objection to the possibility of discrete or digital systems thinking. Turing defeats this quite easily by arguing that with probability and decimals, continuous systems can be modeled by digital ones. Today, we would probably consider the nervous system to be more similar to a digital system than Turing did.

Finally, the informal nature of human behavior compared to machines is mentioned as an objection. Turing uses an example of knowing what action to perform at a changing traffic light. Turing’s proposal is for the machine to have rules imposed on it to act more human – rules to seem as if it was less governed by rules, as it were. The commentary is interesting, coming from an eccentric man who seemed to have little patience for social norms himself [Hodges, 1992]. Today, uncertainty and probability are large focuses in AI, and would probably be the first approach for making a computer seem more informal in its behavior.

At the end of the paper, Turing concludes with speculation on the future of the field. Two main approaches are proposed: focusing on an abstract activity, such as chess, or giving a machine sensory “organs” and teaching it like one would teach a child. In this, he foresees computers playing chess at the level of humans (we now know they surpassed our ability in chess in the 1990s, when Deep Blue defeated World Champion Garry Kasparov). Machine learning also now plays a great role in AI, but it is the first approach, of specialized, expert applications, that has proved to be the main role of AI in the modern world. The paper ends with a highly quotable phrase, applicable to many fields of science in general:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

1.3 Importance, Impact, and Fallout of the Paper

Turing’s paper is special in the breadth of its impact – over sixty years later it maintains relevance in computer science, philosophy, psychology, and the cognitive sciences. It is one of the foundational papers for AI and machine learning. Even with the benefit of hindsight, Turing’s predictions are remarkably correct considering how young the field was when he made them. As explained in other sections, the impact of AI was slower than expected, but is substantial in the modern world.

The strongest criticism of the paper came from Searle in his “Chinese Room” argument [Searle, 1980]. Searle takes us through a thought experiment, proposing he is trapped in a room with a cipher of Chinese characters. Written Chinese messages are fed to him, and he uses the cipher to write responses. The implication is that he has no idea what the messages actually mean; this is, of course, an analogy to a machine answering questions in a similar way, without understanding meaning. Searle does concede that the definition of “understand” is considerably more difficult than it appears at first glance.

While Turing does not get the benefit of being able to respond to Searle’s attack, his original paper did anticipate it, so we may assume he would maintain his position that consciousness is different from the ability to think. A further point is that one must question the room-confined Searle himself. If he is unable to understand the Chinese characters being passed to him, does this make him unable to think?

Boden comes to Turing’s defense against Searle’s attack, largely by pointing out that the appeals Searle makes to the uniqueness of biological systems compared to engineered systems are at best irrelevant and at worst incorrect [Boden, 2003]. Boden argues that there is nothing “special” about humans: computer vision can functionally equal our own, and our neurons behave in a digital way not unlike a silicon transistor.

This argument is in some ways strengthened by the perspective of Newell and Simon [Newell and Simon, 1976]. Even in 1976, they viewed computer science as an empirical science – a far cry from the logic-based mathematics initiated by Turing and his colleagues. The increased complexity of computers led to them being regarded as “the programmed living machine…the organism we study” by the duo.

2.1 Description of Artificial Intelligence and Machine Learning

Turing’s “Computing Machinery and Intelligence” is a seminal paper in artificial intelligence and machine learning, and continues to be debated in robotics, computer science, psychology, and philosophy today. While the paper had far reaching and interdisciplinary implications, this analysis will focus on the implications to applied sciences.

Even the very definition of Artificial Intelligence (AI) is controversial. When trying to describe intelligence, one soon recognizes that one’s description is limited by the observations and experiences of a human, and only one human at that. Intelligence is considered an important distinction of the human experience, but we find that in defining intelligence we declare agency in a highly personified way.

Worse, it is our instinct to assign intelligence by factors that, on examination, are arbitrary: use of language, maths, or tools, for example. We find ourselves no further along than Descartes, who ascribed agency to himself by declaring, “I think, therefore I am”. Unfortunately, this leaves us unable to evaluate other entities.

Turing’s solution, as will be outlined and analyzed in more detail in the second part of this paper, was to avoid any single criterion for assessing intelligence, and instead to propose that an intelligent machine is one that is able to successfully imitate a human in written conversation. This trial has since been named the “Turing Test”, and probably remains the best tool for evaluating artificial intelligence, although little recent effort has been put into designing a machine to defeat the test [Rich and Knight, 1991] [Haugeland, 1989].

It is perhaps difficult to separate early computing and artificial intelligence, as the young fields had yet to specialize. It could be argued that the field of computing started quite early, with philosophers such as Descartes and Hobbes contemplating the nature of mind and machine, and the similarities between the two. Early mechanical computers were proposed by Wilhelm Schickard (1592-1635), Blaise Pascal (1623-1662), Gottfried Leibniz (1646-1716), and Charles Babbage (1792-1871) [Haugeland, 1989]. Arguably, Babbage’s design was the first that could be considered a computer rather than a calculator, and Lady Lovelace’s contribution to the field in documenting and explaining Babbage’s never-built machine should be noted. In 1936, Turing proposed the Universal (Turing) Machine, a theoretical architecture for a computer that inspired the von Neumann architecture used in nearly all computers now [Haugeland, 1989]. A Universal Turing Machine has a tape and a read/write scanner. The tape has many ‘bits’ on which information is either read from or written to by the head.

The head also knows what state it is in. From this alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine. Artificial Intelligence was first seriously considered around the time of the first computers, around 1950, when Turing published his paper proposing the “Turing Test”. From 1952-1969, quite good academic progress was made in AI, led by MIT, Stanford, Carnegie Mellon University, Dartmouth, and IBM [Boden, 1990].

Many of the early proposed problems in the field were solved, most being simple examples of playing games, word recognition, algorithmic problem solving, machine learning, or solving maths problems [Rich and Knight, 1991]. However, progress soon slowed as managing complexity of problems proved more difficult than had been anticipated.

The 1990’s saw considerable progress again, with improvements in speech recognition, autonomous vehicles, industrial systems monitoring, and computers playing board games better than the best humans [Mitchell, 1997] [Norvig and Russell, 2003]. Many of these have since been commercialized, or are on the cusp of being so. Through this process, AI has become application-specific, and most work is focused on applications with more utility than passing the Turing Test [Norvig and Russell, 2003]. Dennett, who was Chair of the Loebner Prize Competition (a Turing Test challenge) in the 1990s, questions if the Turing Test will ever be beaten, and considers attempting to pass the test not useful research for serious modern AI [Dennett, 2004]. Turing himself anticipated the test would be challenging to pass: he once predicted that by 2000 a machine would exist with a 30% chance of passing the test in a five-minute conversation [Norvig and Russell, 2003]. Later, he said in a radio interview that he expected it would be over 100 years (from 1952) until a machine was built that would reliably win the imitation game [Proudfoot and Copeland, 2004].

Theoretical AI has, in response, created some divisions within itself. Firstly, between strong and weak AI. Weak AI can act like it is intelligent, whereas strong AI is intelligent and conscious [Haugeland, 1989]. A criticism of the Turing Test is that it may allow an entity to pass which is weak AI. Modern AI has distanced itself from Good Old Fashioned AI (GOFAI), which was an approach based more on strict logic [Haugeland, 1989]. The present interest is more focused on the management of uncertainty through probabilistic systems.

An important technique in AI is machine learning, which is defined as performance at a task improving with experience [Mitchell, 1997]. Learning was proposed by Turing in the paper as a possible technique to enable flexibility in a machine. The first examples were seen as early as 1955 [Proudfoot and Copeland, 2004]. But it wasn’t until the 1990s that the field of computing advanced sufficiently for great advances to be made [Mitchell, 1997]. The general learning technique is similar to that proposed by Turing: the program is given a function such that it learns by trying to avoid the previously experienced “pain” of a mistake. However, the actual implementation of these algorithms in a practical sense is probably more difficult than Turing and his contemporaries anticipated [Mitchell, 1997]. This in part explains the slower than expected progress in AI.
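Turing’s learn-by-avoiding-pain idea can be caricatured in a few lines. The action names and penalty values below are invented purely for illustration:

```python
import random

def learn_by_pain(actions, penalty_of, episodes=200, seed=0):
    """Toy learner in the spirit of Turing's proposal: each action keeps a
    running 'pain' total, and the agent increasingly avoids painful choices."""
    rng = random.Random(seed)
    pain = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Explore occasionally; otherwise pick the least painful action so far.
        if rng.random() < 0.1:
            a = rng.choice(actions)
        else:
            a = min(actions, key=lambda x: pain[x])
        pain[a] += penalty_of(a)
    return min(actions, key=lambda x: pain[x])

# A hypothetical task where action "c" never causes pain, so it is learned.
best = learn_by_pain(["a", "b", "c"], {"a": 1.0, "b": 0.5, "c": 0.0}.get)
print(best)  # c
```

Modern reinforcement learning is vastly more sophisticated, but the core loop of acting, experiencing a penalty, and shifting future behavior away from it is recognisably the same.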

However, machine learning, and artificial intelligence in general, both benefits from and produces useful cross-pollination with other fields including statistics, philosophy, biology, cognitive science, and control theory [Mitchell, 1997].

2.2 Importance to Engineering and Industrial Practice

While Turing discussed AI in a quite general and academic sense, the field has since become application-specific, with programs written for niche tasks. Robotics, including autonomous vehicles, is probably the most mechanical application of AI, and is of growing importance [Proudfoot, 1999]. Industrial processes and systems are increasingly controlled by AI and AI-inspired systems [Norvig and Russell, 2003].

In some ways, AI remains heavy on potential compared to the current benefits realized from it. Futurist Ray Kurzweil envisions AI as a key technology in a process that will create unprecedented change in our society [Kurzweil, 2001]. He argues that technology improvement and adoption follow an exponential rather than linear growth curve, as seen in Moore’s Law of computer transistors. This is in part because improved technology allows the design of further improved technology. Assuming AI follows similar progress, the relatively slow early progress is to be expected, with accelerated returns to be seen in the future.
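Kurzweil’s contrast between linear and exponential curves is easy to see numerically; the doubling rate below is illustrative, not a claim about any particular technology:

```python
# Ten periods of growth, both series starting at 1.
linear = [1 + n for n in range(11)]        # 1, 2, 3, ..., 11
exponential = [2 ** n for n in range(11)]  # 1, 2, 4, ..., 1024
print(linear[-1], exponential[-1])  # 11 1024
```

After only ten periods the exponential series is two orders of magnitude ahead, which is the intuition behind the accelerating-returns argument.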

This leads to what Kurzweil calls the Singularity, which is the point when the slope of this technology-growth curve becomes so steep as to be effectively asymptotic. Kurzweil predicts similar and inter-connected growth in neuroscience and (especially bio-flavored) nanotechnology. According to Kurzweil, when this all happens, we will have essentially infinite access to technology. One outcome of this that Kurzweil is fond of mentioning is that we would see our own immortality [Kurzweil, 2001]. So, at least according to Kurzweil, the implications of AI to engineering, industry, and beyond are vast.

Conclusions

Alan Turing made considerable contributions to computer science and affiliated fields. “Computing Machinery and Intelligence” is one of several seminal papers contributing to the legend of its author. The paper has had wide impact, beyond engineering and computer science, shaking the very foundations of how people view themselves. Like Galileo and Copernicus, Turing has forced us to reflect on our own state of being. In a more practical sense, he pioneered in this paper the fields of AI and machine learning, which have recently proven to be of considerable value. One can only anticipate that computer-based intelligence will play a greater role in the future of humankind.

In addition to his notable contributions to science and academia, Alan Turing was a fascinating man, one whom history is only beginning to properly recognize. One can hope that the future will give Turing the acknowledgment that his contributions to an increasingly important field deserve.

Growing up, I thought I was good at the sciences, less good at reading and writing, and lousy at the arts. How did I know? That was what my grades told me in school.

The problem is, for young minds, this labelling is false and harmful.

Art in school is about how straight you can draw lines and whether you can paint without crossing over them. At least, this is what art is until the later years of school. Then, you are taught what art really is: communicating a message between the creator and the person experiencing the art. But by the time I was in high school, I had convinced myself that art wasn’t something I was good at or needed to be good at.

I don’t consider myself an artist today, but one closely-related role I do play is a designer. This happened through an education in mechanical engineering, where my focus turned to medical devices, with additional interests in energy and space exploration. This was around 2008, so not that long ago, but pre-iPad and pre-Kickstarter.

The result was that my design education and journey has been in this era where product success is so intertwined with “soft” design: user experience, human factors, and communicating a message to the user. This part of design engineering is far, far more about art than it is math or physics.

I now find the “art” of design to often be more enjoyable than the “science” of it. But I feel like I’m playing catch-up. I wish I hadn’t been pigeon-holed as “not an artist” when I was far too young for anyone to know what I was going to be.

I think feedback and even ranking systems are good, even in grading young students. Feedback helps to improve future performance, and comparative ranking by grades provides competition: a reflection of life beyond school.

The problem is that performance in subject X isn’t a good predictor of whether the person will do well at X in the real world. Studying X in school isn’t doing X. School is designed to teach basic skills (arithmetic, writing essays, researching), the ability to learn, and, lastly, content. Performance in school is judged on how well and quickly the student can learn these skills and content, which is different from how well the student can succeed in that field.

Premature judgements made on this basis can affect the student’s career. I have heard from a lot of people who picked a particular career because they were good at it in school, or avoided another because they were bad at it. It is unfair to both the student and society that we are doing a poor job of helping people toward careers they will enjoy and be good at.

We need to develop a culture, in schools and the institutions that surround them, in which failure is OK. Labelling people as “good” or “bad” at certain things in school needs to stop, especially for young students. These self-images can remain with students for a long time. Instead, we should try different approaches with young students who are struggling. We shouldn’t let the student or anyone else say they are “good” or “bad” at something before anyone can possibly know.

Rejection is hard – both for the people receiving it and the people giving it. I am writing about my experiences with rejection and failure over the past year. Some of the people I hope may find these thoughts useful:

People frustrated with failure

People who have to reject others

Myself, whenever I’m again in either of the above categories

Myself, looking back at these events from the future

Someone considering doing a PhD

Within this past year, I’ve grown to recognize failure and rejection as feedback and learning. Nobody does everything right – what’s more important is that we learn from the mistakes that we do make.

For most of my life, I’d been pretty “lucky” about failure. Me and the world had a deal: if I worked reasonably hard and went through all the right motions, things would just work out. Sure, I failed some exams or didn’t get a job I wanted, but these seemed incidental. For the big stuff, especially “career” stuff, I always won.

That changed around February last year.

I remember proclaiming to my Nana when I was about 10 that I would do a Bachelor’s of Engineering in Canada before going to grad school at MIT or Stanford. Weird dream for a 10-year-old, but I ended up staying on that path, worked hard, and graduated with a Bachelor’s (Hons) and Master’s in Engineering from UBC in Vancouver.

I applied to five top PhD programs in the United States in the fall of 2011. From January to April, I was rejected by every program I applied to. For someone not used to failure, I was crushed.

I had been nominated for a PhD scholarship by my academic department at UBC. Scholarships make admissions easier because they give external validation of the student’s abilities and also reduce the financial commitment of the university the student is applying to.

Somewhere in the middle of being rejected from schools, I received notice from the funding agency that my application was not successful but would be put on the waitlist. Waitlists are never good news in academia, especially with research budgets being reduced as part of austerity measures.

This was around March, and a good friend, Andrei, convinced me to apply with him to TechStars’ and Y-Combinator’s summer programs. We applied to both with a proposal for movement feedback for runners, and later other sports (RUNNR). After being passed over for TechStars, we were offered an interview with YC in April. The invitation went to Andrei, and he texted me saying we’d gotten it. I immediately replied to say it wasn’t cool to joke about that – he knew I was bummed about my PhD applications. A call and forwarded email reassured me: we were going to California!

The next few weeks, we worked very hard on making a prototype and preparing for the interview. When we applied, pretty much all we had was a vision, splash page, and small understanding of our market and competition. In the two weeks before our interview, we managed to get a prototype that sometimes worked, did some more market research online, and talked to all our friends who run.

We were confident going in to the interview. Probably overconfident. We told all our friends we’d be moving to California in a few months and had started looking into visas. I remember walking across the bridge over Highway 85 near the YC office, and thinking the concrete wall and steel fence looked like a prison. I was heading to a parole hearing, and my luck was about to change.

The interview was a great experience, and looking back I’d even say fun. The atmosphere in the room was pretty tense, to the point, and urgent. There have been a fair number of articles and a few apps aimed at helping prepare for the interview. We didn’t spend a lot of time with them and I don’t think they would have been helpful for us. I think the interview tries to determine the personality and character of the team, and whether you’ve done your homework on your business. I’ve heard of companies that “hacked” their way in, but I think they must be rare.

We spent a lot of time before the interview prepping our pitch and prototype demo. Maybe it helped a bit, but the interviewers didn’t seem interested in the demo, and we basically had to show it while holding our laptops as they were kicking us out of the room. Instead of wanting to see our demo, they asked mostly about our users, market size, and operations plan, plus a few questions that I am pretty sure were just engineered to see how we think on our feet. In general, we were not well prepared for the interview.

After we were done, we headed to Palo Alto for the afternoon. The wait was about seven hours, and it felt even longer. While wandering aimlessly through the mansions behind University Ave, we finally got the email. We didn’t get in.

After a very quiet and wandering walk, we eventually made it to Nola’s Bar in downtown Palo Alto. Andrei was in a far chattier mood than I was:

“What could we have done differently? What are we going to do next?”

He wanted to dissect the interview and what we could have done. I wanted to save that for later, and just relax and try to enjoy my beer. Sensing I wasn’t up for a chat, he changed to trying to cheer me up:

“Just making it for an interview shows we’re on to something. You’re a smart guy and everything will work out.”

I’m normally a pretty relaxed guy, but I snapped. I wasn’t sad, I was angry! I had been working hard and the world had gone back on our deal. I’d been rejected again:

“Stop trying to cheer me up, I’m not sad! I’m just tired of losing!”

That was enough to get him off my back until we finished the round. Sensing I was more up for conversation, Andrei remarked that I must be feeling better. I was.

“I decided I’m not going to lose anymore”

For the most part, I’ve stuck to that. With Runnr.me, Andrei and I were selected a month later for the LeWeb start-up competition in London, one of ten teams out of 600 applications.

At the end of summer, it became apparent that our product concept at RUNNR was not well aligned with what runners actually needed. We needed to completely change our product or find another market. After careful reflection, Andrei and I agreed that we weren’t that passionate about running and we couldn’t think of applications of our product that would be big enough to grow a big venture. This was almost exactly what the YC interviewers had advised when they sent their rejection email months before.

Around the same time, the scholarship I had been waitlisted for came through.

Wanting to work more on medical projects and to improve my technical abilities, I decided to try again to get into PhD programs. I have recently started a PhD in Bioengineering in England, and love my program. Andrei is working on several different projects, one of them being http://www.coderook.co/ which is a start-up to provide mentors to apprentices in software development and http://stasishq.com/ which does corporate wellness programs.

So what changed?

First and most importantly, my perspective. I realized my “deal” with the world was stupid. The world didn’t owe me anything, and karma is not something you can “cash in”. Failures sometimes just happen – to paraphrase Rocky Balboa, sometimes you just have to take the hit and keep moving forward. Maybe this is just a lesson of maturity and one I expect I will continue to learn.

I learned I needed to be a more active participant. At the highest levels of a field, I don’t think you can afford to be casual about things you really care about or want to happen. In particular, I left my first round of PhD applications largely up to the system. I emailed a few professors I was interested in working with but didn’t push nearly as hard as I should have. I felt bad taking the time of these people, who I know to be very busy and with inboxes flooded with applicants. What I should have thought was: yes, I am using their time, but if it works out they will be getting a great student. There are times when it pays off to be different levels of pushy and have a willingness to bend the rules of the system.

The second time I applied for PhD programs, I emailed around 5-8 professors directly. Two seemed interested. Based on this, I booked a flight to Europe to visit them and their labs. I then told some others that I’d be in Europe and would like to meet with them. After booking my flight, I added five more meetings, at a total of five universities. After I got back from my trip, I ended up with offers from every place I applied! Big difference in results.

When I first applied for PhDs, there was definitely an ego component. I picked universities based on prestige and location more than on whether particular professors there had research interests similar to mine. I dreamed of going to university X and dropping out to do a start-up, largely as an egotistical comparison to now-famous people who had done that. Maybe the applications committee saw this through my application, and rightly gave me a thumbs-down. Ego is a poor reason to do anything, and especially something that is a multi-year commitment.

The time between when I decided to stop working on RUNNR and when I had offers for PhD programs was interesting. It was the first time in eight years that I had taken more than a three-week break, and for the first time in my life I was neither working on something nor waiting for something to start.

It was a great time in my life. I consider it a sabbatical – it wasn’t that I wasn’t doing anything, but I wasn’t working on projects that had long term goals. I bought a sketchbook and tried drawing, thinking it might help later in product design and also just to try something artistic. I read a few books a week. But probably most importantly, I spent a lot of time thinking about what was really important to me and that I should aim to do more of.

I learned that the most important thing, and what I now put the most effort into, is relationships with friends, family, and co-workers. I used to take these for granted more than I should have, and relationships with other people really are the best part of life. As part of this, I learned more about the desire to find your “tribe”. I’ve always found it easy to get along with a wide variety of people, but have never really found a group that I totally felt was my “team”. Maybe such a “team” doesn’t exist, for me or anyone. I think it’s important to look for it, but also to have the independence to thrive as an individual.

The other thing I learned was how much I value work that solves meaningful and important problems. To me, most of these problems are medical. We only have one life. We spend a lot of our lives working. And, most importantly, our work can make a difference in the world. To me, it’s important to try and make that difference with my work.

I plan to go back to entrepreneurship after my PhD. Biotech is a field where I think PhDs are well justified for entrepreneurs because new technology is the main competitive advantage for many companies. Grad school is also a great time to explore, learn new things, and learn about yourself. There is such easy access to knowledge, from free journal subscriptions to sitting in on lectures to visiting speakers on a wide variety of topics. Playing the “student” card is also a great way to get to talk with people who are normally hard to reach.

It’s been a wild ride. I’ve learned a lot, and I expect there is plenty more to learn. The failure that I was so down on only a few months ago has both made me a better person and aligned me more with my goals. In a twist of fate, I’m much happier and more excited about where I’m at now than I would have been if I had gotten my way a year ago, and I’d like to think I’m wiser for the experience.

The big news story of the past week in Britain has been the cold temperatures and snowfall, which, as a Canadian, I am free to find amusing given the fear caused by comparatively mild weather.

The second biggest story is the recent collapse of four major retailers here (from Yahoo): Comet, Jessops, HMV, and Blockbuster. Comet is an electronics retailer, Jessops does photography, HMV sells music and video, and Blockbuster is a film and game rental company. Such stories are not limited to the UK, as worldwide recessions and the growth of e-retailers have hit retailers stateside also.

One simple explanation for the demise of these companies is disruption from online services, and perhaps also, in the case of Jessops, the rise of digital cameras and smartphones. The Yahoo article linked to above provides a good summary of other reasons for the fall of these rather large companies.

Does this signal the beginning of the end of traditional, brick and mortar retailers?

Brick and mortar is facing a multi-pronged assault. The first prong is online retailers and distributors such as Amazon, Netflix, and iTunes. The second is an emerging threat from home and local mini-manufacturing, such as desktop 3D printing. I am personally pretty bearish on the home version, although such systems have a chance to become mainstream if part quality improves, costs continue to come down, and the ability to work with multiple materials improves. A more likely threat is from local mini-manufacturers, using technologies like 3D printing, waterjet cutting, and injection molding to make semi-custom products on demand. The advantage is less machine down-time and costs distributed across many customers. Additionally, staff at a mini-manufacturer will be able to assist with design, design selection, machine operation, and assembly.

Each of these distribution methods has its own benefits and shortcomings. Some are detailed in the table below.

Home manufacturing and mini-manufacturers are still in relative infancy, and it is hard to assess how great a threat they are to brick and mortar retail. Personally, I think the processes most often suggested for home or mini-manufacturing have inherent weaknesses related to quality and multi-material products. Additionally, the main value-add of this kind of manufacturing is customization, which I don’t think will have the mainstream appeal to justify higher costs over mass-produced products in most instances. I have previously written in more detail on my opinions here.

For brick and mortar sales, a lot of value comes from being able to interact with the product. Look and fit are much easier to judge in person than through a web browser. Personal and expensive products like wedding rings and cars are things that people usually don’t buy without first interacting with them. For these kinds of high-end products, knowledgeable sales staff are also valuable in choosing the right purchase for you. This contrasts with economy-minded products, where online reviews are often more helpful than sales associates.

Groceries are a retail category that has been slow to gain popularity in e-retail. This is due to a trade-off between time and choice. Being perishable, groceries are a category where two-day shipping is insufficient. Further, people like to select the best produce, meats, and breads from those displayed at the store. In cases where quick access to the product is required, brick and mortar is the preferred type of retailer.

Apple stores have been lauded as an example of a great brick and mortar model, evidenced in part by Tesla Motors modeling its showrooms after Apple stores. Both are an additional vertical in products and experiences that are already highly controlled by their respective companies. This allows the companies to control the entire customer experience, from selection and purchase through to usage. Additionally, there is a marketing and advertising aspect to having such a visible front for the respective products.

There are segments where brick and mortar seems unlikely to be able to compete with e-retailers. When the product is information, near-instantaneous free transfer, near-zero inventory cost, and convenience make a clear case for digital distribution, as seen in Netflix for films and TV, iTunes for music and other media, and the shift from newspapers and magazines to web-based equivalents. The only option for brick and mortar in these industries may be to hope for a time machine, to travel back in time to develop or acquire digital distribution.

A second example is Dell, especially the company as it was around 2005. Dell has a great direct-order model, where the buyer can semi-customize their purchase and also get great value compared to buying in-store. In some ways, the model led to the cannibalization of the consumer computer hardware industry. Competition and commoditization led to a collapse of margins. It is said that in an efficient market, there is no money to be made. That is what has happened in computer hardware – it’s great for consumers but not for manufacturers.

Distribution through online sales reduces regional market inefficiencies. A customer in San Francisco no longer has to choose between local stores: they have access to stores over a wide area, subject only to tariffs and shipping costs. This lowers prices because a store in San Francisco is now competing with online stores all over the world, in addition to local stores.

A discussion on e-retail would be incomplete without mentioning Amazon, the giant in the space. Amazon wins due to the massive diversity of products stocked, quick and cheap shipping, convenient use, and meaningful user reviews. Hidden from the purchaser, it has the infrastructure and distribution centers to make the experience work. With that infrastructure and its momentum, the burden is probably on any other retailer – web-based or not – to show how it will beat Amazon.

How can this be done? I have a few ideas and suggestions:

1. e-retail is only as fast as the postman

For things that are urgent or perishable, brick and mortar has an advantage over e-retail. While there is convenience in shopping from home, there is also convenience in buying something and getting it right away.

2. Quality sales advice

Brick and mortar stores should be much better than e-retail at consumer education, and they usually are for high-end and very personal products. There is no reason e-retail resources such as price comparisons and user/expert reviews can’t be as accessible in-store as online. There are apps and interactive displays starting on this, but I think there is further mileage.

However, the key advantage of brick and mortar should be knowledgeable and caring sales staff. People interact with products in a very personal way, and a salesperson should be much better suited to understand the user’s needs than a web based script or robot. Consumer information and marketing could be a key area for innovation for brick and mortar retail.

3. Beating e-retail on price is trench warfare

Competing on price alone is rarely a sustainable business model, and brick and mortar probably has an inherent disadvantage relative to online retail due to higher overhead. The lone advantage may be that brick and mortar ships in bulk, compared to per-unit shipping for e-retail. Here, a model like Costco’s may remain competitive due to high volume, a low number of products, and low-overhead operations.

I see a continued shift from brick and mortar to e-retail. The consequences could be quite far-reaching. Not only will business be transferred from traditional to online retailers, but there are implications for employment, international trade, a surplus of retail real estate, and a decline in the cultural pastime of shopping. It’s still early to draw conclusions on home and mini-manufacturing, but I don’t see these as major threats in the space, especially not in the near term.

There remains an opportunity to disrupt the retail space. For example, mobile devices are still relatively greenfield for retail apps, without any dominant players. Further, the social aspect of shopping should not be overlooked. Perhaps this factor will support brick and mortar, or perhaps some innovation will improve the social aspect of online retail.

What do you think? I’m always interested to hear your comments below.

Do it like it’s the last time you ever will. Because eventually, it will be.

I mean this in a less morbid way than the phrase is usually used, but also with seriousness. I put a lot of value on the experience of life, and, even aside from death, there is a real probability that things you take for granted today will change. For an infinite number of reasons, the routines of today are not how they will be tomorrow.

Relationships and friendships begin, change, and end. Things break, are replaced, or are improved. Perspectives and experiences mold us into doing different things and experiencing them differently. People change geographies, and geographies change around people.

In technology industries, we get giddy over disruption. Disruption is opportunity. Disruption is change.

But change is also an end. Even the most mundane of tasks could be a cherished experience if you knew it was the last time you would experience it.

The trouble is that you only rarely know when you are doing something that it will be the last. The solution, I think, is to try for a frame of mind to experience everything like it’s your last.