David

Hi! My name is David da Silva.

I made this website to show off my design skills and display case studies of some of my favourite projects / experiences. It is far less complete than I'd like, but I don't have time to work on it right now.

Some of my passions are:

Multiplayer games, their networking algorithms, and team strategy (e.g. in e-sports)

Blog posts & articles

Gamedev / Product Design projects

These are some of my most treasured projects and experiences. Sadly, I haven't had the time to write a proper description for each of them, nor to add all of them here, but I can definitely chat about them if you ask me!

HackMed 2018 has been my favorite hackathon. I went alone and teamed up with the people I shared a table with, who ended up being exactly the kind of people I was looking for: lovely people from diverse backgrounds.

Alice Smith, major in Biomedical Engineering

Alex Shmerg Schudel, major in Maths

Joelle Cheong, major in Politics and Sociology

Description WIP.

#artpluscode: programmatic art experiments

Generated using JavaScript and HTML5 Canvas. I publish them first on my Instagram account, and sometimes I stream their creation on Twitch and upload the recordings to YouTube. All the code is released on GitHub.

The #artpluscode wall website is probably the best way to view them, but it doesn't have every single piece released on Instagram.
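A minimal sketch of the kind of Canvas-driven piece described above (a hypothetical example, not one of the actual #artpluscode pieces): a pure function computes the points of a parametric rose curve, and a couple of Canvas calls would render them.

```javascript
// Compute the points of a rose curve r = cos(k * θ) — a classic starting
// point for programmatic art. Kept as a pure function so the math is
// independent of any canvas.
function roseCurvePoints(k, steps, radius) {
  const points = [];
  for (let i = 0; i <= steps; i++) {
    const theta = (i / steps) * 2 * Math.PI;
    const r = radius * Math.cos(k * theta);
    points.push({ x: r * Math.cos(theta), y: r * Math.sin(theta) });
  }
  return points;
}

// Rendering is then just connecting the dots on an HTML5 canvas:
// const ctx = document.querySelector('canvas').getContext('2d');
// ctx.beginPath();
// roseCurvePoints(4, 1000, 200).forEach(p => ctx.lineTo(p.x + 250, p.y + 250));
// ctx.stroke();
```

Separating the point generation from the drawing also makes it easy to re-render the same piece at different sizes or stroke styles.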

Heart disease is the leading cause of death in the US, claiming around 600,000 lives per year. The chances of surviving a cardiac arrest are 4x higher if a first responder is on the scene, but only ~20% of people know what to do.

LifeSaber is an Android and Android Wear app that tackles this problem by making anyone a first responder and life saver.

EscapeRoom
Interaction Design & Prototyping
START Hack 2016

Long story short, Logitech put a Tobii Eye Tracker at our disposal. I started thinking about new in-game interactions that could use it, or existing ones that could be improved through its use.

I had recently been playing Tomb Raider on the Xbox One, and I remembered these two aspects of the game that left me wishing for a better experience:

I didn't need to observe the environment and think about what could be collected or interacted with, because those items would flash/shine in an obvious way, making me fall into a "detect shining things" mode. Or I would just end up walking next to everything just in case a context menu would pop up.

When inspecting a relic to find an engraved description or detail on it (a kind of puzzle this game has), sometimes the game would consider that I had found it and unlock it for me, even if I hadn't actually noticed the detail, simply because after rotating the relic I had coincidentally stopped with the detail facing the camera. Boom! Challenge gone, for free. So fun.

See, the problem here is capturing the player's intent. How do I discern between the player walking next to the bottle just because the bottle is on their way, or because the player wants to interact with the bottle? How do I confirm to the player that they can indeed interact with the item, without giving away that it's an interactable item when the player had no intention of interacting with it? This is especially hard in a third-person controller-based game – first-person games can at least get away with using the center of the screen as a pointer.

The relic puzzle has a similar problem: even if the engraved detail is visible on the screen, even on its center, it doesn't mean that the player has noticed it. A simple addition like making the player press a button when the detail is right in the center of the screen would have avoided this problem.

Back to the hackathon, these two aspects were perfect test candidates for being improved with an Eye Tracker, which would be used as a pointing device to understand the player's intent: if the player is next to the bottle, and their eyes are pointing to the bottle, it is very likely that they want to interact with the bottle, so bring up that context menu. In the relic puzzle, if they fixate their eyes on the engraved text, that means that they have noticed it.
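The intent check described above can be sketched as a simple combination of proximity and gaze: the names, thresholds, and state shapes below are illustrative, and a real Tobii integration would get the gaze point from the tracker's SDK.

```javascript
// Decide whether the player intends to interact with an item by combining
// two signals: the player is physically close to it, and their gaze (a 2D
// point on screen, from the eye tracker) rests on the item's screen position.
function wantsToInteract(player, item, gaze, opts = {}) {
  const maxDistance = opts.maxDistance ?? 2.0; // world units
  const gazeRadius  = opts.gazeRadius ?? 50;   // pixels around the item on screen

  const dx = player.x - item.x, dy = player.y - item.y;
  const closeEnough = Math.hypot(dx, dy) <= maxDistance;

  const gx = gaze.x - item.screenX, gy = gaze.y - item.screenY;
  const lookingAtIt = Math.hypot(gx, gy) <= gazeRadius;

  // Only when both hold do we show the context menu — walking past the
  // bottle without looking at it no longer triggers anything.
  return closeEnough && lookingAtIt;
}
```

The relic puzzle variant would additionally track how long the gaze stays inside the radius (a dwell time), so a glance that merely sweeps across the engraving doesn't count as noticing it.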

Sounds great and exciting, right? Sadly, we couldn't get much working at the hackathon. It was a 24h one, we aimed too big instead of focusing and making incremental progress, we had hardware issues with our laptops and the Leap Motion (with which I wanted to be able to rotate the relic using my hand), I needed a lot of sleep time, and we didn't organize too well (integrating like 4 parallel Unity projects, hello darkness my old friend).

Overcharge, GGJ18
Game Design, Thruster UI, and Networking Code

The theme was "Transmission". Inspiration:

"Multiplayer Online Game Development" Course
Everything
Sixth Edition

This June 25-29 I'll be holding the Sixth Edition of the course in Barcelona. You can sign up and find more info here. And if you know of anyone that could be interested, please let them know! Personally, I would have loved to attend a course like this one when I was younger – it would have propelled me so much.

Physics-based Sonic Riders
Everything but the 3D assets and music/sfx

My first solo Unity project, a homage to one of my favorite games. The controls are very hard, similar to drone racing (yaw, pitch & roll).

I worked on this in preparation for Improbable's interview – I had barely touched Unity before, and an assignment required modifications to a Unity project. During the internship, I quickly added multiplayer support using Improbable's SpatialOS, to see what integrating the SDK into an existing game was like.

Fun bug I had: I made the jump force depend on `deltaTime`, so I sometimes got radically different jump heights (playing on a laptop contributed). Through this I learnt that one-off forces should not depend on time.
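The lesson generalizes: a continuous force acts every frame and must be scaled by the frame time, but an impulse happens once, so scaling it makes the result framerate-dependent. A minimal sketch of the difference (hypothetical names, plain JavaScript rather than Unity C#):

```javascript
// A continuous force (e.g. thrust) acts across the whole frame, so the
// velocity change is acceleration * deltaTime — consistent at any framerate.
function applyThrust(vel, accel, deltaTime) {
  return vel + accel * deltaTime;
}

// A one-off impulse (e.g. a jump) is an instantaneous velocity change.
// Scaling it by deltaTime — the bug described above — makes jump height
// depend on whatever the frame time happened to be that frame.
function jumpBuggy(vel, jumpForce, deltaTime) {
  return vel + jumpForce * deltaTime; // wrong: framerate-dependent
}
function jumpFixed(vel, jumpForce) {
  return vel + jumpForce;             // right: same kick every time
}
```

In Unity terms this is the distinction the physics API itself draws between continuous and impulse force modes.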

Wrote this AI in < 1h using ML (slightly custom kNN). It is trained by observing a real player drive, then tries to replicate what the player would do in the same situation (it stores memories as [pos, vel, inputs]; during execution it finds the memory closest to the current situation and mimics it). pic.twitter.com/o1qIM8Q7yz
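The approach in that tweet can be sketched as a nearest-neighbour lookup over recorded (state, inputs) pairs. This is illustrative code with made-up names; the actual project's state representation and distance metric may differ.

```javascript
// Record memories while watching a human drive: each memory pairs the
// situation (position + velocity, flattened into one state vector) with
// the inputs the player was pressing at that moment.
function recordMemory(memories, pos, vel, inputs) {
  memories.push({ state: [...pos, ...vel], inputs });
}

// At runtime (k = 1): find the stored situation closest to the current one
// (squared Euclidean distance) and replay the inputs the player used there.
function mimic(memories, pos, vel) {
  const state = [...pos, ...vel];
  let best = null, bestDist = Infinity;
  for (const m of memories) {
    const d = m.state.reduce((sum, v, i) => sum + (v - state[i]) ** 2, 0);
    if (d < bestDist) { bestDist = d; best = m; }
  }
  return best.inputs;
}
```

With enough memories densely covering the track, nearby states tend to call for nearly the same inputs, which is why even this tiny k = 1 scheme can drive passably.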