October 15th, 2018
The Law Without Walls Experience: A European Law School Student Participating in the World's Biggest Law and Technology Competition

By Pierre Ferran

Pierre is a third-year European Law student at Maastricht University. He is particularly interested in LawTech, as he is also a software developer in his free time. In 2018, he participated in the innovative Law Without Walls program, where his team focused on streamlining the low-risk contract lifecycle for in-house legal teams. He now works as a Legal Engineer Intern at the legal engineering firm Wavelength.

What is LWOW?

Law Without Walls is a giant hackathon. For four months, you work in a team to develop a solution to a legal problem of your choice, ranging from corporate workflows to social justice, with all topics ultimately revolving around a mixture of technology, law and entrepreneurship. I was the first ever student from Maastricht University to be sent to LWOW, having applied and been the one lucky student selected. The application process takes place in three stages: you first apply to the law faculty by sending a letter of motivation, as well as all the application material required by LWOW. If selected, the Law Faculty issues a recommendation for you, and you can then send your application to the LWOW staff. If the LWOW staff selects you, you will be interviewed over Skype. Should the interview be successful, you will be invited to the kick-off, where everything begins.

The LWOW O Kick-Off

The LWOW O kick-off took place in St Gallen, Switzerland. No preparation was required aside from taking an online personality test. I knew nothing about what to expect, who was going to be in my team, or really anyone there for that matter. The organizers assign you a hotel room that you share with another participant, allowing you to meet another person from the program on the first day. I shared a room with Angelo Massagli, a student from Miami Law School, where LWOW originates. I arrived in St Gallen on the Friday evening, with the first day of the kick-off starting at 9am on the Saturday and running the whole day. At breakfast, it became apparent how diverse the LWOW students were. While we are used to working in an international environment here in Maastricht, that environment is very European-centred. LWOW students come from around the world, including many from South America. Upon arriving at the location of the kick-off, the first thing they have you do is pose for a Polaroid picture, which you then pin to a board alongside the pictures of your team members, who have been doing the same.

With the help of the pinned pictures, your next task is to find your team amongst the crowd. This is the first time you find out who is in your team, so let me introduce you to mine: the team is composed of three students, three team leaders, and two mentors. The two other students in my team were Valentina and Nicolas. Valentina is a Chilean law student, studying at the Pontificia Universidad Católica de Chile in Santiago, while Nicolas is also a law student, studying in Bogotá, Colombia. The three team leaders all originated from the UK, with two of them, Chrissy and Adam, working at Pinsent Masons LLP, the law firm sponsoring our team. Chrissy is a Senior Associate at Pinsent, while Adam is a data scientist. The other team leader was Peter, a commercial solicitor working in-house at Royal London, an insurance firm in London. Finally, our experienced mentors were Marsha, a retired American attorney, and Chris, a French-American attorney, who sits on so many boards he probably has a say in what you eat for breakfast without you knowing it. I am not going to spoil the entire kick-off for you, but you can expect amazing speakers with insane credentials, great team bonding and fun activities. The first day pumps you up, and you become very excited for the second and last day. It is worth noting that the people you get to talk to during the day are amazing, all dynamic and with interesting backgrounds. Most importantly, they are all very accessible and open to conversation. To give you an example, the assistant general counsel of Microsoft was in the room, and he insisted on being called “Steve”.

The second day of the kick-off is where you get to do stuff. After intensive coaching, you and your team take part in a mini hackathon. We were assigned the theme of the mental wellbeing of law students. We got to work and designed a small app that would allow students to anonymously vent their frustration and see how many students in their local area were experiencing similar difficulties. The project was called Stressbot, and its business model was based on the idea that universities could pay to access anonymized data about the wellbeing of students in their local area. Amongst all the projects presented, this idea was the runner-up for most viable project.

These two days are very intense but will leave you full of energy and optimism for the real project you will have to build over the next few weeks with your team. Of course, the final day is closed off with complimentary drinks, as a nice way to say goodbye.

LWOW O itself: “an experience”

The process of developing your idea is very well structured by the LWOW team. The first thing you do coming back from the kick-off is schedule the many meetings you will have between your team and the LWOW staff. The staff gave us set targets to meet every week, much like PBL, with the team working at its own pace. We decided to have a weekly meeting with our team leaders, with the students working during the rest of the week and reporting at the meeting. This is a demanding schedule, considering you must also prepare your regular tutorials!

Initially, we planned to focus on the gender pay gap, using technology to extract pay data from employment contracts and generate simple statistics. We identified this problem as it was a recurrent topic in the news at the time. Our team leaders also pointed out that a new UK regulation requiring businesses to disclose gender pay gap statistics was coming into force at the time.

You quickly learn that LWOW is a lot of back and forth, and that’s a good thing! As inexperienced students, this back and forth allows you to try out different areas, question professionals, and figure out what problem is actually worth solving. The goals the LWOW staff sets for you are helpful for this. The LWOW meetings are like a hardcore drilling session (for those of you accustomed to the French education system: they are something like a khôlle de prépa). You have a set format to follow, which always includes a slide deck, and you present your work and progress using it. Often, the feedback will seem quite harsh, being blunt but constructive about your performance, but it is only meant to push you further and further, to really make you go above and beyond. Despite the emotional rollercoaster these sessions may cause, they bloody work.

The quest for perfection in LWOW lets you learn fast, develop many skillsets, and get a good introduction to the workplace of tomorrow. Technology is all around in LWOW: we might be young millennials drowned in it, but we still have a bit to learn. LWOW will teach you how to use technology efficiently in a business setting.

Back to our project: we had to start investigating whether the problem of gender pay gap statistics was worth solving. We used our excellent mentors’ and team leaders’ connections to interview working professionals, asking them how they planned to get ready for the upcoming regulatory deadline to publish gender pay gap statistics. All of these professionals were extremely helpful and kindly made time in their busy schedules for us. It truly was an enriching experience to discover how potential “clients” see a solution to their problem, and how they currently deal with the matter.

We quickly found out through the interviews that our initial problem had already been solved outside the legal world. Human Resources systems already provided a way to quickly access gender pay gap statistics, and thus, after receiving feedback from both our team leaders and the LWOW staff, we decided to pick another problem.

At this point, we were already a bit behind on the LWOW schedule and needed to catch up. We went back to basics and started questioning our team leaders about their issues at work. Peter quickly started complaining about an issue he faces every day. As a busy in-house solicitor, Peter prepares important and valuable contracts for his company every day, but he also has to answer every single legal question any employee in the company has. Thus, every day someone comes to Peter and asks a question like “I am negotiating a contract with a supplier, but they want to remove the data protection clause from the NDA. Can I accept that?”, to which Peter always replies the same thing: “Are they handling data? If not, then yes, you can remove it.” Understandably, Peter quickly gets fed up with these small, repetitive questions, which are relatively unimportant because they concern low-value, risk-free agreements. So how could we remove the burden of answering them? That is what we set out to do, and we started the process again: interviewing professionals and building up a solution to this problem. The more you work on your project, the more you fall in love with it. It is with this energy that you create a solution, a business plan, and of course, a pitch. The end goal of LWOW is to pitch your idea at ConPosium, as if you were trying to sell it to everyone in the room.

We called our solution Satori. Satori is a Buddhist concept: there cannot be Zen without Satori. That is what we aimed to bring to Peter’s life: zen. But there is no better way to explain what Satori is than by showing you our pitch, which you can watch here.

To make sure our project would stand out, I decided to use my tech skills to build a prototype of Satori; our pitch features a demo of it. It is important to note that no particular tech skills are required to make a great pitch at LWOW, but it is also a great opportunity to learn! Wavelength.law, an innovative law firm that specializes in legal engineering (you should check them out), sponsors LWOW by sending Felix Schulte-Strathaus, the legal tech officer of the program. He is there to help you map out your project, as well as potentially point you in the right direction to make a prototype.

ConPosium: LWOW Vice

ConPosium is it. It is the moment you have been working so hard for. Plus, it takes place in Miami. Yes, you get to go to Miami for “business”. As you have seen in our pitch, every team gets to present its project during the two days of the event, and at the end, the best projects are given an award.

The room to which you pitch is filled with brilliant people, from everywhere around the world, and with many different backgrounds. It truly is the people that make LWOW an amazing experience. You will meet dynamic like-minded students, who will become your friends, as well as enthusiastic professionals working everywhere from small legal practices to massive companies such as Deloitte or Microsoft. The connections you make there are invaluable, and the experience itself beyond enriching.

It’s hard to properly describe the LWOW experience, so I will leave you with this. If you are interested in legal technology, if you want to know how the legal profession will work tomorrow, or if you are a repressed entrepreneur doing a law degree, then LWOW is for you. I must thank everyone at LWOW, in particular Erika Pagano, Michele DeStefano, Catalina Goanta and Felix Schulte-Strathaus, as well as our team sponsors Pinsent Masons LLP and Royal London Insurance, for giving our team this amazing opportunity, which I hope you will enjoy one day as well.

Written by Doris Bogunović who shares regular blog posts with us on the role of the European institutions working on issues related to technology. She has a legal background, a keen interest in technology as well as experience with both the Court of Justice of the European Union and the European Parliament.

The current discussion that Europe is led by technocrats might be somewhat unjustified, at least when it comes to the European Parliament. Members of the Parliament are often not experts in certain areas, so they work with consulting bodies that help them in the decision-making and policy-forming processes. The most usual way an MEP keeps informed is through a well-set-up team at his or her cabinet. Every MEP has an office staff usually comprised of permanent assistants, contractual agents and interns. Since every MEP is a member of various delegations, conferences, committees or sub-committees (in which he or she represents the interests of the Member State of his or her election), this staff is in charge of research on all the topics the MEP works on within those bodies. But where else do MEPs get their information when it comes to technology specifically? The answer is a consulting body called STOA.

STOA, the European Parliament's office for Science and Technology Options Assessment, was officially launched in March 1987. By 2003, the office had its own set of rules, and today it serves the European Parliament (EP) on a permanent basis, carrying out the important task of providing independent and impartial information on science and technology to the Parliament's committees and other parliamentary bodies: specifically, researching scientific and technological developments and opportunities, as well as their risks and implications.

STOA’s activities consist of conducting Technology Assessment and Scientific Foresight projects and organising workshops, expert discussions and visits to scientific and technical institutions. Any EP Member or EP body may submit a proposal to the STOA Panel for STOA activities to be carried out.

There are 25 members of the STOA Panel, appointed for a renewable two-and-a-half-year period by nine EP Committees:

Committee on Industry, Research and Energy/ITRE

Committee on Employment and Social Affairs/EMPL

Committee on the Environment, Public Health and Food Safety/ENVI

Committee on the Internal Market and Consumer Protection/IMCO

Committee on Transport and Tourism/TRAN

Committee on Agriculture and Rural Development/AGRI

Committee on Legal Affairs/JURI

Committee on Culture and Education/CULT

Committee on Civil Liberties, Justice and Home Affairs/LIBE

The Panel is envisioned to meet at least six times a year.

The thematic priority areas of STOA are: eco-efficient transport and modern energy solutions; sustainable management of natural resources; potential and challenges of the internet; health and new technologies in the life sciences; and science policy, communication and global networking.

The Vice-President of the European Parliament responsible for STOA is the Spanish politician Ramón Luis Valcárcel Siso, a member of the EPP group (Christian Democrats). He is not only an ardent supporter of digitalisation but also a quite down-to-earth opponent of misogyny and hate speech, even when it comes from conservative forces.

Furthermore, STOA cooperates with other parliamentary bodies through the European Parliamentary Technology Assessment (EPTA) network. EPTA is a network of technology assessment (TA) institutions, organised in 1990 to advise parliaments across Europe. Today, it has 13 full members, one of them being STOA. EPTA organises annual conferences and promotes cooperation between parliamentary bodies. EPTA members (individual institutions like STOA) are permanent consultants to parliamentary bodies, helping with the decision-making process by carrying out technology assessment studies on behalf of parliaments.

EPTA’s goal is to ‘provide impartial and high quality accounts and reports of developments in issues such as for example bioethics and biotechnology, public health, environment and energy, ICTs, and R&D policy’. A good example of EPTA’s work is the 2017 Mobility Report on mobility pricing in different countries and their future plans for tackling mobility issues, with contributions submitted by EPTA members.

What do you think: is Europe full of technocrats? How does it work in your country? Should parliamentarians be specialists in certain areas or just well-informed decision-makers? How objective do you think the information delivered to them is? Are the reports they draft available to the public as well?

Written by Caroline Calomme, Technolawgeeks’ co-founder and product manager for connected car services at Be-Mobile. This blog post is based on a speech given at ‘L’intelligence artificielle au coeur de l’entreprise’ organized by CMS Belgium in October 2017 and what I’ve learned from my wonderful new colleagues.

We’ve always feared disruptive inventions in the field of transport. That’s nothing new. The first trains? People believed that the journey would melt their bodies. The passengers wouldn’t be able to breathe at such high speed and their eyes would be damaged. The train rides could cause instant insanity. Even worse, the trains would make women’s uteruses fly out. Trains weren’t the only mode of transport fueling the public’s anxiety. In the United Kingdom, the first cars were banned from travelling faster than 2 mph (3.2 km/h) in the city. Even bicycles were considered extremely dangerous. Those who dared to try this engine of death ran the risk of suffering from a terrible medical condition: the bicycle face. The speed and “the unconscious effort to maintain one’s balance” would leave you disfigured and scarred for life. In comparison to this, our reactions to autonomous cars seem almost reasonable.

Do we really understand the technology, though? To many, artificial intelligence and mobility are synonymous with self-driving vehicles. It’s the first picture that comes to mind. Yet we’re only at the very start, with not-so-glamorous – although very practical – applications such as parking assistance, speed adaptation or lane centering. Autonomous vehicles fascinate us, but we shouldn’t confuse the potential of this technology with the reality. The fact that they’re featured in movies and TV shows doesn’t mean we’ve gotten that far. You’ll still have to wait a while before taking a nap in your car after a long day at work. If you don’t believe me, have a look at those articles in Wired, Forbes, TechCrunch or the Huffington Post.

Of course, this doesn’t mean that we shouldn’t start reflecting on the policy implications (see also Doryane’s post). When they’re introduced on the market, self-driving cars will disrupt insurance schemes as we know them and raise serious ethical concerns. Do we want to build cars which obey every single rule on the road? Or shall we program them to know when it’s best not to follow the rules to the letter? In the first scenario, we’ll need to clarify the hierarchy between obligations, to avoid situations where it’s impossible to obey one rule without breaking another (yes, this is an actual possibility, since the driving code is still written by fallible humans). In the second scenario, where the vehicles are taught to think like us, there’s a chance that they’ll also do the math: costs of the fine < benefits from driving faster… There’s still a lot to think about!

Nonetheless, we tend to focus so much on the vehicles that we forget the infrastructure. That’s unfortunate, because that’s where we’ll see technological advances happening in the near future. Cars don’t interact only with one another, but also with road signs, traffic lights and much more (check out the European Commission’s website for more information on vehicle-to-infrastructure policies). Here’s a very concrete example. Today, we have dynamic boards on the road. Sometimes, they indicate a new maximum speed, due to road works for instance. Instead of only sending information to the cars, the boards can also receive information from them (for the techies among you, it’s of course a figure of speech). Imagine the next step: displaying the ideal speed at which cars should be driving based on the current traffic flow. It would decrease the probability of accidents caused by hurried commuters and ensure that drivers do not slow down for no reason.

True, we don’t need the infrastructure, we have apps. But let’s not forget that almost 20% of the population is over 65. True, we can’t always rely on drivers following the advice. It’s a fact that until we remove the drivers from the equation by sending the information directly to an intelligent vehicle, we’ll unfortunately need to count on common sense (this video on how ghost jams start illustrates why that doesn’t work). On the bright side, there’s already a lot of data to learn from: the correlation between the number of trucks and the decrease in speed, the exits and times where traffic slows down the most, the impact of the weather, etc. If we took a step back, we’d realize that artificial intelligence can also help us reduce traffic jams even before controlling the vehicles (some inspiration here). While a vehicle parking itself in a crowded city while you go shopping has its perks, this tangible progress would already have a great impact, as anyone who needs to drive to work can probably attest.

But let’s get back to the key role of the infrastructure. Have you ever waited at a red traffic light at a crossroad where all the other traffic lights also happen to be red? And where the light is green for pedestrians although they’re nowhere near the crossroad? If only you could let the traffic light know this makes no sense… That wouldn’t be very practical, because every driver would send requests and ask for priority. But what about vehicles transporting dangerous goods? Ambulances? Public buses which are already 10 minutes behind schedule? Even better, the traffic light could detect that 10 vehicles are waiting in one direction while there’s only one vehicle in the other direction, and take this into account. It could also recognize an elderly or disabled person who needs a little more time to cross (on that note, ‘there’s an app for that’).

It’s time to shift the policy and legal debate to the real world. While I admire the willingness not to be outrun by technology once again, policymakers and legal experts might be overlooking fundamental advances that are a lot easier to implement than self-driving vehicles and also raise questions of liability, cybersecurity, public procurement, intellectual property, competition law, data protection, etc.

Written by Kristopher Badurek, a Bachelor student of Maastricht University’s European Law School. He is a tech enthusiast who puts emphasis on interdisciplinarity and self-development.

Massively Multiplayer Online (MMO) games maintain their popularity, with new games spawning more frequently than ever. And yet, certain problems have always plagued players of such games. Many a player has at least once logged into their account to find their beloved equipment gone. At times like this, many questions run through the heads of troubled players. One of them is: did I actually own that equipment?

Generally, all objects usable by players, be they swords, blocks or currency, as well as the avatars themselves, are considered under the umbrella term of virtual property. They are believed to be more than simple code, mostly due to their perceived fungibility and their ability to create a sense of personal attachment. The term virtual property seems to imply that it is a type of property, and from the lay perspective, it can indeed be perceived as such: it can be possessed, used and enjoyed by the players. However, the situation is not as simple as it may initially seem.

Legally speaking, virtual property does resemble property to some extent. In his heavily influential work, Joshua Fairfield bases the similarity to real property on three factors: rivalrousness, persistence and interconnectivity. Both in virtual and real life, owning a piece of property comes with the possibility of excluding others from its use. Whether it’s a real house or one in Second Life, its owner may invite some visitors and decline others as he deems fit. Both types of property are also persistent. Like its real counterpart, a virtual house will not disappear without a trace for no reason, even after the player turns off his computer. It is still there, somewhere, waiting for its owner to return. Lastly, both virtual and real property are interconnected. Upon inviting a friend to a house, be it real or virtual, both the player and his friend will be able to experience the same objects in the same place, even though the friend does not own them.

However, unlike real property, all virtual property is burdened with certain limitations, and it is those limitations that effectively block its legal recognition as property. They stem from the terms of service, which every player must accept before entering a virtual world. This so-called End-User License Agreement (EULA) controls virtually every action of the user. Most of these licenses pre-emptively require a waiver of any potential right to any virtual property that the player may amass over the time spent in game. The player is still allowed to use, enjoy, and sometimes even profit from his virtual property, but at the end of the day, those objects are still owned by the virtual world’s developers. This means that virtual property is simply not owned by the player, and any rights to it that the player may have are derived from the license granted by the developers.

Written by Merle Temme, a European Law School alumna whose paper on algorithmic transparency was nominated for the European Data Protection Law Review (EDPL) Young Scholars Award.

Algorithms (sequences of instructions telling a computer what to do) are becoming deeply entrenched in contemporary societies. When designed well, they are incredibly useful tools in accomplishing a great variety of tasks, simplifying human life in many different ways. Their use is not, however, uncontroversial, especially when algorithms are being used in automated decision-making (ADM) and therefore make decisions that have potentially life-changing consequences for individuals without any (or only marginal) human intervention.

It is by now well known that, like humans, algorithms can carry implicit biases and may well deliver discriminatory results. Remedies do exist – for instance, having developers factor in positive social values (like fair and equal treatment) already at the design stage of the algorithm and, in case of a violation of these values, enforcing them through anti-discrimination legislation. Rendering a system both fair and efficient, however, requires extra care and attention and such an effort will cost time and money. Operators of ADM may therefore easily be tempted to rely on less well-designed – albeit cheaper – ADM systems.
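To make the bias problem concrete, here is a minimal sketch in Python (the scoring rule, feature names and postcodes are all invented for illustration): a decision rule that never looks at a protected attribute directly can still systematically disadvantage a group when a seemingly neutral feature, such as a postcode, acts as a proxy for it.

```python
# Hypothetical example: an automated loan-scoring rule that uses only
# "neutral" features, yet encodes a bias via the postcode.

def loan_score(applicant):
    """Score an applicant using seemingly neutral features only."""
    score = applicant["income"] // 1000            # income in euros
    if applicant["postcode"] in {"1011", "1012"}:  # historically wealthy areas
        score += 20                                # postcode acts as a proxy
    return score

# Two applicants with identical incomes receive different scores
# purely because of where they live.
print(loan_score({"income": 40000, "postcode": "1011"}))  # 60
print(loan_score({"income": 40000, "postcode": "9731"}))  # 40
```

If postcodes correlate with a protected characteristic, a rule like this discriminates in effect without ever processing that characteristic, which is exactly the kind of outcome the design-stage remedies mentioned above are meant to catch.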

The European Union legislature decided to tackle this problem last year by regulating the way in which the data forming the basis of the algorithm’s decision is being processed. The EU’s overhaul of its data protection regime, the General Data Protection Regulation (GDPR), will have to be applied by the Member States from May 2018 onwards. The GDPR provides for rules such as transparency requirements, which are applicable to human and automatic decision-making alike, but also features special provisions which are pertinent to ADM alone. Not only is this intended to address the abovementioned accountability issue; greater transparency is also supposed to help human subjects of ADM to better understand what factors underpin the decisions that affect them and how the system can be held accountable.

The GDPR is praised as ambitious and designed to bring about substantial change, aiming at making Europe ‘fit for the digital age’, but has at the same time been criticised for being vague and ambiguous – a hybrid legal instrument mixing many aspects of a directive and a regulation. In name it is a regulation, directly applicable across the board in the EU, albeit one that leaves many aspects to be regulated by the Member States; a feature typical of European directives.

This ambiguity has spawned an interesting debate among researchers on how the GDPR’s transparency requirements are to be interpreted in so far as ADM is concerned. Goodman & Flaxman – in a rather brief paper – entered the scene in summer 2016 by identifying a ‘right to explanation’ as the most important expression of algorithmic transparency in the GDPR, without, however, providing a strong line of argumentation for this statement or even identifying a legal basis for such a right. They identify the right to explanation as a more fully-fledged version of the right established by the Data Protection Directive of 1995 (which from May onwards will be superseded by the GDPR) and argue first, that an algorithm can ‘only be explained if the trained model can be articulated and understood by a human’. Secondly, they hold that any adequate explanation would, at a minimum, ‘provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?’.
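For a simple linear model, the kind of account Goodman & Flaxman call for can be read off directly from the model itself. Here is a minimal sketch (the weights and feature names are invented) of how input features relate to a prediction:

```python
# Toy linear model: the prediction is a weighted sum of the input features,
# so each feature's contribution to the outcome is directly inspectable.

WEIGHTS = {"income": 0.5, "debt": -0.75, "years_employed": 1.25}

def predict_with_explanation(features):
    """Return the prediction and a per-feature breakdown of it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by the size of their influence on the decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = predict_with_explanation({"income": 30, "debt": 10, "years_employed": 5})
print(score)   # 13.75
print(ranked)  # [('income', 15.0), ('debt', -7.5), ('years_employed', 6.25)]
```

Answering “which features play the largest role in prediction?” is trivial here; the open question in the debate is whether anything comparably meaningful can be produced for complex, non-linear models.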

Wachter, Mittelstadt & Floridi took up the gauntlet and argued, on the basis of the structure of the regulation and its drafting history, that the evidence for a right to explanation is inconclusive. Instead, they propose an alternative ‘right to be informed’ about certain aspects of the decision-making process (e.g. the purpose and legal basis of the processing). First, they claim that even if a right to explanation existed, restrictions and carve-outs in the GDPR would render its field of application very limited. Secondly, they set out the central point of their paper, the degree to which ADM can be explained in the first place: Wachter et al. make a distinction between how general or specific the explanation could be and at what point in time it would take place, only to conclude that the sole possible interpretation would be a very general explanation of ‘system functionality’ (what they name the right to be informed).

A very recent paper by Selbst & Powles, however, describes Wachter et al.’s analysis as an ‘overreaction’ to Goodman & Flaxman’s paper that ‘distorts the debate’. Their central point of critique is Wachter et al.’s analytical framework, namely the model they use to explain the degree to which the inner workings of ADM can be explained. According to Selbst & Powles, that model is nonsensical and rooted in ‘a lack of technological understanding’. Interestingly, most of their paper is focused on debunking that model and a detailed explanation of why it does not correspond to computer programming reality. Only then do they turn to the legal text itself. By applying a holistic method of interpretation, they conclude that the regulation, requiring ‘meaningful information about the logic involved’ (in an automated decision), must contain ‘something like’ a right to explanation in order to enable the data subject to exercise her rights under the GDPR and human rights law.

The way this will play out in practice will become clear once the GDPR becomes applicable in a few months and European courts have the opportunity to weigh in on how to interpret it in the disputes laid before them. The development of this debate so far – from purely legal arguments (Goodman & Flaxman) to a more technical analysis (Wachter, Mittelstadt & Floridi) and the rebuttal of the latter (Selbst & Powles) – is, however, remarkable: it indicates that on a topic as complex as algorithmic transparency, legal knowledge alone is no longer enough. To win the argument, the lawyer or legal researcher of the future (or rather, the present) must have conceptual knowledge of the technology he seeks to assess – be it to criticize, regulate, or use it. Understanding technology and writing about it in a ‘not purely legal’ way not only adds credibility to one’s own analysis; reproaching someone for a ‘lack of technological understanding’ may also become the most effective tool in rebutting a colleague’s arguments.