In 2017, Hanson Robotics’ social humanoid robot Sophia was granted citizenship in Saudi Arabia. Sophia is strikingly human-like in appearance: she can speak, move her face realistically, recognise other faces and even crack a few jokes. It is strange, perhaps, that the world’s first ‘female’ robot citizen was created in a country where many women still have to ask permission from their male relatives to leave the house. But setting that aside, is Sophia really the intelligent, conscious machine that she seems?

It can seem completely intuitive to us that we do indeed have free will. Surely, it can only have been my prerogative, and nothing else, that led me to buy that vintage Louis Theroux T-shirt for £35. There can’t be any pre-determined reason making that an inevitable decision, right? To clarify, free will extends to more than just clothing choices; free will is the ability to have many choices in front of us and to pick one without any external influence. It’s the ability to be the playwright of our own life, instead of the lead actor. However, when we look into the science behind these concepts, it seems that there really may be an explanation for everything we do and think.

“I thought you were going to study fine art,” says a former classmate. I wince. I thought I was going to study fine art, too. Drawing has always been an escape for me. If I had not moved away from Berlin before completing high school, I would not be studying marine and freshwater biology at university. In the past, I have felt trapped between two different worlds: art and science.

Art can be seen as an expression of human imagination that creates an emotive response, and science as the exploration and discovery of the human mind. Both shape our perceptions of life from the mind itself, meaning there must be some commonality or connection. I believe we find art so impactful because of its innate interaction with science: the relationship between the two is a layered question, as the effects of experimental art and biological composition consistently overlap. We may be drawn to art itself through the paradox of familiarity and discovery.

DISCLAIMER: This guide does not encourage drug use and is for educational purposes only. The supply, possession and consumption of any form of drugs can lead to serious legal consequences, emotional damage and death.

Man or machine? The question has become a staple of almost every work of science fiction and is now slowly creeping into our everyday lives. Take, for example, the artwork above. As Artificial Intelligence evolves, the line between human- and machine-made work is constantly blurring in every profession, and while these paintings were made by a human, Seirin Hyung (@seirinhyung90), there are abundant examples of machine-made work that functions almost as well as, or better than, that of humans. Even in creative fields, AI has recently been used to write screenplays and even a chapter of Harry Potter. And while artists may be less than delighted by a machine takeover, fields like medicine see great benefit in integrating AI into routine tasks, given its potential to improve patient care.

Artificial Intelligence generally refers to machines that are able to replicate human intelligence in some form, usually concentrated on the areas of learning, reasoning, problem solving, perception and understanding of language. AI has posed some of the most debated philosophical questions of the 21st century, as many attempt to gauge the extent to which human intelligence can be replicated. There are many fields within AI, with machine learning gaining the most attention in mainstream media. This is the science of getting computers to learn without being explicitly programmed: machines are fed information that they then use to build predictive algorithms. It might seem as if the machine is learning by itself in a way that can even feel sentient, although it is really a process of detecting and analysing patterns. One method within machine learning is deep learning, which uses layers of artificial neural networks to simulate the way the human brain works. It is used especially in biology and medicine, and has proven particularly useful for one device.
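To make the idea of “learning from examples” concrete, here is a minimal toy sketch: a single artificial neuron, the building block that deep networks stack in layers, trained to reproduce the logical AND of two inputs. This is purely illustrative (the data, learning rate and update rule are our own assumptions) and bears no relation to any real-world system mentioned in this article.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Squashes any number into the range (0, 1), like a soft on/off switch.
    return 1.0 / (1.0 + math.exp(-x))

# Training data: pairs of inputs and the pattern (AND) we want detected.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start with random weights; "learning" is nothing more than nudging them.
w1, w2, b = random.random(), random.random(), random.random()
rate = 1.0

for _ in range(5000):
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = target - out          # how far off the prediction was
        # Nudge each weight in the direction that shrinks the error.
        w1 += rate * err * x1
        w2 += rate * err * x2
        b += rate * err

for (x1, x2), _ in examples:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

After training, the neuron has not been told the rule for AND; it has merely adjusted its weights until its outputs match the examples, which is the pattern-detection process described above, scaled down to one neuron.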

[Machine Work: Pagodas]

GUM had the exclusive pleasure of speaking with Awni Hannun, a PhD student in Computer Science at Stanford University, advised by Andrew Ng, who is well known in the field of AI. Ng sees great opportunity at the intersection of healthcare and AI and believes that AI has the potential to disrupt many industries for the better, medicine being no exception. In order to make the field more accessible, he has launched free courses in machine learning and deep learning that are available online.

Hannun decided to pursue a career in machine learning after attending a university class on the subject. His current project, produced in collaboration with iRhythm, combines machine learning with the commonly used electrocardiogram (ECG) and has been shown to outperform cardiologists in arrhythmia detection using a new deep learning algorithm. Electrocardiograms are used to check the rhythm of the heart and its electrical activity. While traditionally done by attaching sensors to the skin, wearable ECG devices are relatively new on the scene. They detect heart arrhythmias by recording the heart’s activity, which is then processed afterwards. With artificial intelligence, this processing is refined so that it can catch more arrhythmias than the average cardiologist would. As the medical field already automates many processes, Hannun is optimistic about further adoption: automation can sometimes create problems, but its normalisation in the medical industry will make it easier to introduce and implement successful new innovations in the future. If you ask Hannun, it won’t be long until this device hits the market, and not long until AI is a standard component of the medical industry.
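The core idea behind arrhythmia detection can be sketched very simply. The following toy example flags a recording as possibly irregular when the gaps between heartbeats (R-R intervals) vary too much from their average. To be clear, this is our own illustrative heuristic with an assumed threshold, not iRhythm’s deep learning algorithm or any clinical tool; the real system learns far subtler patterns from raw ECG traces.

```python
def mean(xs):
    return sum(xs) / len(xs)

def possibly_irregular(rr_intervals, tolerance=0.15):
    """Return True if any beat-to-beat interval (in seconds) deviates
    from the average by more than `tolerance` (an assumed, illustrative
    threshold of 15%)."""
    avg = mean(rr_intervals)
    return any(abs(rr - avg) > tolerance * avg for rr in rr_intervals)

steady = [0.80, 0.82, 0.79, 0.81]    # a regular rhythm
erratic = [0.80, 1.40, 0.55, 0.90]   # an irregular rhythm

print(possibly_irregular(steady))    # -> False
print(possibly_irregular(erratic))   # -> True
```

The deep learning approach replaces a hand-written rule like this with patterns learned from many labelled recordings, which is why it can spot arrhythmias that a fixed threshold would miss.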

You almost can’t mention AI without conjuring images of the robot Sophia and the Blade Runner-esque ethical questions that follow. But AI might have a bright, and much less dramatic, future in medicine: Hannun thinks it will free up time for healthcare professionals and make medical care both cheaper and more accessible. It is important to see the benefit of AI in fields where it has the potential to save lives. Whatever your stance, there is no doubt that this technology will become more integrated into our everyday lives, and becoming knowledgeable about AI and progress in the field is therefore an important step. While technology isn’t always an easy field to dive into, right now is a good time to start, with many free resources made for beginners. Alongside Andrew Ng’s free online courses, the book “Life 3.0” by MIT professor Max Tegmark covers the many areas of AI in an understandable way.

Research into Brain-Computer Interface (BCI) technology is one of the most exciting and rapidly-advancing frontiers in Bioengineering – ranging from efforts to reverse blindness, deafness, and paralysis, to detecting consciousness in patients suffering from ‘locked-in’ syndrome, where they are unable to move or communicate in any normal way. The brain will always capture the attention of the scientific community – and it is in Silicon Valley, a centre of innovation, entrepreneurship and technology, where Matt Angle is conducting his research in the field.

Glasgow is blessed with a rich history of scientific discovery, innovation and progress, led by some of the greatest scientific pioneers on record – both old and new. It is less a question of finding work to feature, and more a case of seeing how much we can proudly present within the constraints of the time available. One exciting thing to look forward to is an inaugural year-long series special on Global Surgery. Five billion people, more than two-thirds of the world’s population, do not have access to safe and affordable life-saving surgery, whether that’s for a difficult childbirth, a road traffic accident or for cancer. This theme was selected not only because it is a global humanitarian imperative, but also because it spans every area of science and will require an organised, interdisciplinary and international effort to tackle – and, most importantly, that effort must include us. Without further ado, thank you to our first global surgeon, Dr Catherine Juillard from San Francisco, California.

Almost every year the same topic hits the headlines: the serious and pressing issue of air pollution in cities across the UK. Since 2010, the UK has exceeded the EU nitrogen dioxide (NO2) pollution limit every year, more often than not within the first few weeks. In London, this troubling pattern continued into 2017, this time with the limit breached in just five days.

In 1945, Jimmie G. was nineteen years old. He had his whole life ahead of him – the Second World War had just ended and Jimmie had his choice of what to do next. He could stay in the navy, where he had been thriving since being conscripted at seventeen, and keep working as a radio operator, or go to college. Years of peace stretched out ahead of him and Jimmie felt optimistic. He was young, bright, charming, totally in control of his future and flushed with the thrill of being part of a historic victory at such a young age. For now, though, he was talking to a doctor, although he wasn’t quite sure about what. But when the doctor held up a mirror, Jimmie’s easy confidence and light-hearted manner dissipated – what he saw was not a teenager but the face of a man in his late forties, still handsome, but most definitely, distinctly old. Jimmie was not dreaming, not in a nightmare. He was, in fact, forty-nine years old, the year was 1975, and he was in the office of neurologist Oliver Sacks. Jimmie G. had severe amnesia. He remembered events up to the year 1945 with exceptional clarity, but the rest of his life was a total blank, dissolved in the deep recesses of his mind. He was almost completely incapable of forming new memories, although occasionally he could recall recent events in a hazy way. Unable to remember his experiences of the present, Jimmie’s mind had latched on to what it could remember – his past – and created its image around him, even in the face of overwhelming evidence to the contrary. Jimmie was living in a world of his mind’s creation – something that, though it sounds extraordinary, is not at all unusual.

Albert Einstein has become an icon in the physics and astronomy communities, and one of his greatest contributions is that most elusive and complex theory, the Theory of Relativity, which has predicted the presence of many things in the still less-understood Universe. The most bizarre of these predictions is the existence of gravitational waves, which were recently discovered. Einstein stands correct once again! To understand what gravitational waves actually are, let’s perform a small thought experiment. Imagine a very flexible rubber sheet that you can spread tight and taut. If you place a bowling ball on this sheet, it will create a dent in the surface. Now, if you place a tennis ball nearby, it will create a comparatively smaller dent and will begin to revolve around the bowling ball (try it at home, but be careful: bowling balls can be really heavy sometimes). Something similar applies to our model of the Universe. Objects of large mass create higher levels of ‘distortion’ (a larger dent) in the fabric of space-time than lighter ones. These masses are in constant motion, and when they accelerate, ripples form in the space-time fabric – and these ripples are nothing but gravitational waves!

Over the last few years, enormous efforts have been made to detect these very weak waves in order to ‘see’ the universe from a gravitational perspective, since electromagnetic radiation offers only one view of it. Gravitational waves are thought to pass through all matter unhindered, which means that observing them will offer a new look at the supermassive objects and events in the Universe, or even at the Universe’s very origin – the Big Bang.

LIGO, the Earth-based detector, has been in the news ever since its team confirmed that they had finally detected gravitational waves. It has been hailed as the discovery of the century (I might be going out on a limb here, but it is huge!). The main problem faced by land-based gravitational wave detectors like LIGO is background noise from Earth, caused by the likes of ocean waves, human interference, mining and construction, and even seismic activity. These interferences limit the sensitivity of detectors on Earth, so, in order to conduct a more effective search for gravitational waves, the concept of space-based detectors was born, ultimately leading to the Evolved Laser Interferometer Space Antenna (eLISA). It is the first space-bound gravitational wave detector, set to be launched by 2034 by the European Space Agency (ESA). It is expected to detect thousands of sources using gravitational waves and, since it is in space, the detector can have gigantically long arms to increase its sensitivity further.

It has been designed as an equilateral triangle made up of three separate spacecraft, one functioning as a “mother” craft and the other two as “daughters”, with an arm length of one million kilometres. The entire detector will take up an orbit around the Sun, trailing Earth by twenty degrees. This massive detector will measure the waves by checking the distortion of the space-time contained within its arms (millions of kilometres), using interferometry – lasers running along every arm of the triangle.
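A back-of-the-envelope calculation shows why such enormous arms matter. A passing gravitational wave of strain h stretches an arm of length L by roughly dL = h × L, so a longer arm turns the same tiny strain into a larger, more measurable length change. The strain value below is an illustrative assumption for the sake of the arithmetic, not a quoted eLISA specification.

```python
# Why long arms help: the same strain h produces a length change
# dL = h * L, so a million-kilometre arm stretches far more than
# a kilometre-scale one.

ARM_LENGTH_M = 1_000_000 * 1000   # one million kilometres, in metres
strain = 1e-20                    # assumed dimensionless wave strain

def arm_stretch(h, L):
    """Approximate change in arm length caused by a passing wave."""
    return h * L

for L in (4000.0, ARM_LENGTH_M):  # a LIGO-scale 4 km arm vs an eLISA-scale arm
    print(f"L = {L:.0e} m  ->  dL = {arm_stretch(strain, L):.1e} m")
```

Even with the space-based arm, the stretch comes out at around a hundredth of a nanometre – which is exactly why laser interferometry, capable of comparing path lengths to a fraction of a wavelength, is the measurement technique of choice.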

Special algorithms have been developed by NASA and ESA over the decades to identify and classify the tens of thousands of different sources eLISA will detect over time. Results have shown that sources can be successfully classified every time, and the underlying techniques are currently being tested on LISA Pathfinder, a spacecraft launched in December 2015 to trial the technology developed for eLISA.

By 2030, our knowledge of the Universe will have changed greatly, and the addition of a new detector will assist this process enormously. Current electromagnetic radiation detectors help us see the Universe, but the window they give us is somewhat limited. With the advent of gravitational wave detectors, our ability to ‘hear’ the Universe will increase manifold and give us valuable insight into the origins of… well, everything. Astronomical science will change drastically, and we may even come to understand more about the mysterious ‘dark’ Universe, all thanks to gravitational waves. Finding them was just the first step!

By Prarthana Desai

The general consensus is that we’re all very busy, all the time. Students get a hard time for being wasters but, in reality, the hours of any given day seem to slip away as easily as our student loans. That’s where Tech Meets Paper steps in. Designed for the Type A stereotype, Tech Meets Paper is a Scottish start-up attempting to reorganise your work life.

Branded as ‘Stationery that talks’, Tech Meets Paper is a line of beautiful stationery as well as an augmented reality app. The app learns to react to your hectic lifestyle, prompting you to take breaks as well as to undertake daily tasks. Think of it as a personal assistant: it can book you a taxi, remind you of a meeting or simply send you an inspirational message to get you through the day.

Tech Meets Paper understands that we are on our phones 24/7 and that we all have a lot of commitments in our lives. The multi-channel app is a unique way to deal with the modern pressures most students face.

Tech Meets Paper is a local, women-run Scottish start-up that is currently crowdfunding on Indiegogo. Kirsty Mac and Rachel Ferguson are the brains behind the operation, with the app currently being developed in Aberdeen. It’s an exciting triumph for women in business and technology as well as for Scottish enterprise.

We all know that we are smarter than Monkeys and Chimpanzees and Mandrills. That has been established for a lot of years, probably well over a 100 of them.

Now though scientists and zoo keepers are starting to realise that not only can Monkeys break bread, write prose and build advanced children’s jigsaw puzzles but they can also speak. No, I know what you’re thinking and you’re incorrect, they don’t use words but various Monkey noises. They don’t use any words, the majority of their communications based on the noises, which makes the majority of their conversations just habbledash and flibbley-pop but it’s a start for them.

Scientists were baffled that our closest primate relative, the Chimpanzee, is not as vocal as the Marmosets are, with the Chimpanzee preferring to use more hand signals, destroying the long established myth that Chimpanzees are clever. Anyone knows that if the human race was to talk in only hand signals, we’d never know what was around corners, what was for tea or be able to give long distance lorry drivers abuse on short wave radio systems.

I would still stock up on paper and dig a hole in your back garden though as when the Marmosets take over they will want your fealty in writing, just so they can show off to the chimps that they can converse on both a verbal and non-verbal level, just to really get it up them.

In other news, scientists in America have most likely decided that it probably rains diamonds on Saturn, they revealed to a perhaps full house at the Dallas conference center. The biggest diamonds are definitely a centimeter in diameter when they fall, similar to a diamond you might could possibly perhaps find on a potential wedding ring but of course there are no rings attached to them on Saturn. When asked by skeptical idiots at the “Big Barbecue Wing Dingz and Science Weekender Marathon” in the Dallas conference center how the scientists knew that it perchance rained diamonds on Saturn as there was no possible way for them to go and observe the occurrence on Saturn, they replied: “It all boils down to the chemistry. And we think we’re pretty certain.” Another fascinating week in the world of Science, I’m particularly sure you could and might maybe agree. That’s all from me this week! Keep the faith, science pioneers, and look to the stars.

From security guard working the graveyard shift to neuroscience blogger snatched up by the Guardian, Mo Costandi tells GUM how all of it was an accident, and how his first book will certainly not be his last.

It’s Friday lunchtime and I’m sitting at my kitchen table when an icon pops up on my laptop screen – missed call: Mo Costandi. I panic, redial, and we connect over cyberspace, greeting each other through the fourth wall of Skype. I had been an avid reader of his blog Neurophilosophy for some time, happy to see neuroscience propelled to the front page of scientific endeavour. The interview could not have come at a better time, with the Obama Administration launching a project to map humanity’s greatest asset, dubbed BRAIN (Brain Research through Advancing Innovative Neurotechnologies). Although the enigmatic nature of the brain has long intrigued scientists, it is only in the last forty years, with the invention of the MRI, that we have been able to really delve into its secret world. Indeed, George H.W. Bush dubbed the 90s ‘the decade of the brain’ – or was that George W. Bush? “I already feel a bit embarrassed,” Mo confesses. “In the introduction of the book that I posted up on Neurophilosophy, I wrote that it was by George W. Bush rather than his father.” The irony of the blunder had me laughing, but Mo seemed genuinely distressed by the slip; and with his first book, 50 Human Brain Ideas You Really Need to Know, due for release, the pressure to succeed is high.

“It’s coming out in three months,” he explains. “It all happened very quickly: I spent ten weeks writing it over summer and submitted the manuscript. January was spent re-reading the page proofs.” He relays the information with a precision that exposes the rigorous, sometimes tedious, but ultimately essential peer review system prevalent in the scientific community. “Probably two-thirds of the book was written off the top of my head; then of course I had to double-check whether I’d got all the facts straight and that I’d used as many up-to-date papers in my research as possible.” The book attempts, in fifty chapters, to cover the fifty most important discoveries of neuroscience in the last one hundred years – no easy feat considering the leaps and bounds the field has taken in the last two decades.

Yet for half a century it was the simplest concept that divided the scientific world: cell theory, the idea that all living organisms consist of cells. The dedication Mo holds for his subject is clear to see; where before it seemed as if he were reciting facts, now his face becomes truly animated. “So there were two groups, basically: one that said the brain must consist of cells like all other living things; the other group said, well, no, we can’t see them under the microscope, so brain tissue must consist of a continuous network.” We’ve all heard of neurons, the fundamental building blocks of the nervous system, but few know of glial cells. “Glia seem to be as important as neurons for the brain’s ability to process information, and this is something that most neuroscientists still don’t appreciate,” Mo explains. “We’ve basically ignored glial cells for the whole history of neuroscience!” It seems that not only are glial cells able to form connections of their own, they also dictate the way neurons communicate with each other. This is the kind of discovery that motivates Mo’s writing. Neurophilosophy started seven years ago: born out of necessity, it is now his main focus. “I basically got kicked off my PhD two years into it and I ended up working as a security guard. I was doing twelve-hour shifts just sitting on my arse all day – or all night – with nothing to do, so I just set up the blog to pass the time and people started reading it.”

There’s a new legislative proposal under way which strips us of any pseudo-privacy we still had on the internet. Actually, it’s not just Britain, and it’s not just the web – the shops are in on us, too! Andrew Gallacher analyses problems that are virtual by nature, but very real in their consequences.

It’s the good old ‘Three Strikes’ rule (governments love those, don’t ask me why): get caught sharing illegal files three times and your internet connection will be cut off. Naturally, there are a whole host of people opposed to this on the grounds of privacy and civil liberties and all those pesky things that governments do their best to ignore. And you would have to say that these people have a point. What if you simply happen to be sharing a connection with the file sharer? Are we now in the business of punishing the innocent along with the guilty? More than this, it seems to set the precedent that an internet connection is a privilege that the government may revoke at will. It’s a law that pretty much no-one in their right minds wants and which was, entirely coincidentally (?), proposed shortly after Peter Mandelson had a meeting with a major lobbyist for the media and entertainment industry.

This is hardly the first time that there has been cause for concern over privacy and civil liberties on the internet, but these issues appear to be surfacing with greater and greater frequency. In the US, the Cybersecurity Act of 2009 is under discussion, which, amongst other things, grants the President the right to disconnect any network, private or public, that he sees fit. Back in the UK, it is already a requirement for internet service providers (ISPs) to store every e-mail sent or received for up to a year, and plans are being considered to create a central government database to store this information. In France there is a proposal to create an internet monitoring agency that would have the power to cut off internet access to illegal file sharers for up to a year and to have the file sharers’ names added permanently to a blacklist. This is just a sample of the kind of issues that have cropped up in governments worldwide during the last 12 months. If we include private industries, the list gets even longer. For instance, there was the Phorm controversy, wherein several ISPs agreed to sell details of their customers’ browsing habits to a marketing company. And Google’s plans to create an online library have brought up a number of problems around copyright and privacy (specifically the ability of private and public institutions to access user data).

You may rest assured that the same people fighting the good fight against Mandelson and his ‘Three Strikes’ law are at work on all these issues. And while students are naturally inclined to agree with the rebellious lot on these developments, we must ask ourselves a question: are they actually right? Might not the evil governments and shady corporations, on some level, on some occasion, have a point? What does privacy even mean in an environment like the internet? The internet is probably the least anonymous, least private space you will ever inhabit. You would, in many respects, enjoy more privacy walking naked through the city centre than you do just by checking your e-mail. The internet is an owned space, more clearly and completely than the actual physical world ever could be. It is at heart a collection of private networks. Every time you transmit and receive information it is passed over those networks, and unless you use encryption that information is entirely open to network owners. That they don’t inspect the contents of data packets is taken largely on trust.

Take the aforementioned case of ISPs selling their users’ browsing history. Disregarding the means used to obtain the information, which was certainly a part of the controversy, your ISP has to know what sites you are visiting in order for you to visit them. So they’re not stealing this information; they have to have it. The issue is ownership. The argument is that you own the knowledge of what sites you have visited, and thus only you have the right to sell it. Visiting an internet site can thus be defined as a private interaction, in much the same way that making a phone call is a private interaction. At the same time, supermarkets record their customers’ spending patterns without complaint from anybody, and it seems to be an analogous situation. In fact, one of the main purposes of supermarket loyalty cards is to track customer buying habits for marketing purposes. So what is your browsing history – a private conversation or a trip to the shops?

Or, coming back to the heart of the problem, what of the government requiring our e-mails to be stored for up to a year? Reprehensible, yet the UK has more CCTV cameras per head of population than any other country, and there is little public outcry. If we can be monitored so completely in our day-to-day activities, it is hardly surprising the government feels comfortable extending that to online ones. Since we don’t seem to have a consistently applied notion of privacy in any aspect of our society, how can it be anything other than quixotic to attempt to apply it to the internet? Government reaction to illegal file sharing is unnecessarily draconian, but file sharing remains a problem and will continue to be one until either IP laws change or the actions of a great number of internet users do. So until society catches up with the reality of the online world, it’s best not to go online in search of privacy and anonymity.

The new Polaroid PoGo Instant printer has now been released to the masses and, to celebrate the launch, the guys at Polaroid kindly posted a shiny new PoGo to the GUM office for a test-drive. Being the fine folk that we are at GUM, we put on our Jeremy Clarkson hats and hit the streets to see what this bad boy could do.