Clouds is a documentary project sharing stories and experiences about technology and code. It presents a collection of thoughts and creations made with and about code: how the computer is replacing handmade creation, what it means to live in our world right now, and, specifically for us as students at ITP, the roads that led there for many of the people we "grew up" on, along with stories about their projects and development.

The truth is that I exchanged a few emails with James George a long time ago, even before ITP, about how my Clouds file wouldn't work on my Windows computer. It's funny that a few years later I sat in his class "Computational Portraiture". Since we couldn't make the file work back then, I had only seen bits of it; this was my first close-to-complete viewing of the project.

The experience is told in a non-linear way: you can choose to play it as one journey between people and experiences, following questions along the way, or to watch it through a directory of people, subjects, or visual sketches, most of which you can interact with.

The representation of time in Clouds is a bit tricky. I can see it as a world of its own, filled with people's depth recordings, but you can also see it as a journey through the questions you ask. If you look left or right you can go to the previous or next story, but it still feels like an isolated island of stories.

Most of the time the journey can run by itself, but the experiences break that routine: instead of just watching people talk, you can hear them and play with one of the interactions, which most of the time invite you to activate them.

The story in Clouds can be non-linear but also centered. You can create your own adventure, but you can also go back to the research and continue wherever you wish. It doesn't actually feel like it has an end; there is a circular timing to it, so I felt like I could never finish the experiences, which might be a good way to show how our opportunities with code are so endless.

I think that as a story, or as a collection of stories, it's a very important one, especially because it mostly features people that we, as an ITP community, believe lead or break through the different fields we believe are creating our future, or who tell more or less common stories of how they got into coding or technology. On one hand it's very personal, and on the other it's very technological, with that cold wireframe look, like they are trapped in a Processing sketch.

I think it's an important piece. I was happy to see people I know talking there, and in some ways it made me want to know more about coding and to get better at it. It also gave me a lot of other ideas and thoughts about how the use of code can change so much around us, and how open we should be about our projects and creations.

Stephanie and I met to start working with DepthKit. We just wanted to see the process needed to make a volumetric recording. After recording ourselves we had some visualizer problems: our depth data layer wasn't synced with our RGB info. After a lot of debugging, meeting Or (a first year) on the floor, and dealing with a slow computer, we got to the geometry tab, which helped us a lot by letting us use only the data we need. I think that for a first shot with the kit we got nice results. The next thing we're going to do is get a few scans of objects we can use as portals (subway stations, elevators, and stairs) in order to create a scene that you can cut from into another scene, building a story between those locations.

I decided to have my object come from Israel, and asked a friend to take photos for me of a place that looks like a subway entrance but really isn't; it's actually a really smelly public toilet in the middle of Tel Aviv. My friend took the shots on his iPhone, but I could still get a really nice result from Photoscan.

For the final project I'm collaborating with Annie and Jordan on the archival AR platform they want to create.

I think it's an exciting idea. Trying to make this work says a lot about how we can look at archival materials in general, but also about location data in AR and the different layers of information we can apply and show in a creative way.

Our research started with an AR professor, Mark Skwarek, an AR guru at NYU who teaches at the engineering school, to find more ways to bring the archival AR footage to life as a layer on top of the real world, following geo information. I also approached Ziv Schneider and Julia Erwin, with whom I study in another class; Ziv had a few suggestions of services that could help us, so I should explore that area. We're also in touch with Todd Bryant, who teaches Bodies in Motion, and in general, he's a genius. A few example tools that we need to look into:

Our midterm got a title this week. After testing and watching how our idea can actually be used, we decided to title it "Unidentified Halo". Our product was able to create an exposure that kept a surveillance camera at ITP from recognizing a face.

The final piece was built from a circuit of 25 infrared LEDs connected to a potentiometer that controlled the power. It drew power from a lithium battery, a light and small power source.
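As a rough sanity check on a circuit like this (the component values below are assumptions for illustration, not the exact parts we used), the current-limiting resistance for an LED string follows from Ohm's law:

```python
def series_resistor(v_supply, v_forward, n_leds_in_series, i_target):
    """Current-limiting resistor (in ohms) for a series string of LEDs."""
    v_drop = v_supply - v_forward * n_leds_in_series
    if v_drop <= 0:
        raise ValueError("Supply voltage too low for this many LEDs in series")
    return v_drop / i_target

# Assumed values: 3.7 V lithium cell, 1.4 V forward drop per IR LED,
# one LED per string, 20 mA target current.
r = series_resistor(3.7, 1.4, 1, 0.020)
print(round(r))  # 115 ohms
```

The potentiometer in our build effectively plays the role of this resistor, letting us dial the current (and so the IR brightness) up and down.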

As you can see in the next example, we ran our image through the Google Cloud Platform service to test our hat, and it worked: the service didn't recognize Rebecca's face while she was wearing the hat.
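A minimal sketch of how such a test could be scripted, assuming the Google Cloud Vision REST API (the exact service and endpoint we used may differ). This only builds the request body, so no credentials are needed here:

```python
import base64
import json

def face_detection_request(image_bytes, max_results=10):
    """Build a Cloud Vision `images:annotate` request body for FACE_DETECTION."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_results}],
        }]
    }

# A real test would POST this JSON to
# https://vision.googleapis.com/v1/images:annotate with an API key,
# then check whether `faceAnnotations` is empty for the hat photo.
body = face_detection_request(b"fake image bytes for illustration")
print(json.dumps(body)[:40])
```

For our test, "success" means the response comes back with no face annotations for the photo of the hat in use.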

Our goal was to make a product that protects you from surveillance in an elegant way. We started from the story of our worry about privacy and ended up creating a tool that can help us and others prevent unintentional identity sharing.

This archive includes recordings of lift-offs from different missions and communications from Earth to astronauts on missions. I chose to use "The Golden Record" recordings, which are the recordings NASA chose to include aboard the Voyager 1 and 2 spacecraft in 1977.

Those sounds were selected by a group of people at NASA and are intended to be found by any intelligent life form, or by future humans. The Golden Record includes sounds from Earth, like humans, nature, and animals, and also includes images that were encoded to be scanned as analog signals. I just wonder whether any other life form could actually make that scan work…

The record also includes Bach, Mozart, Beethoven, Stravinsky, Chuck Berry, and more, and carries an hour-long recording of the brainwaves of Ann Druyan (American writer, creative director of the Golden Record, and wife of Carl Sagan, who headed the project team). During the recording of her brainwaves, Druyan thought of many topics, including Earth's history, civilizations and the problems they face, and what it was like to fall in love.

I decided to create a space scene and use the welcome audio included on the record. Those audio files are recordings of people greeting in different languages.

I started with some tutorials from Unity's website and created a game, thinking that maybe I could swap the objects so you'd only hear a greeting after getting close to one. The game works well, but it would require some coding to adapt its functions to other assets, so I decided to go with the example package we had in class instead. I started by changing the skybox to space, then fell into the Asset Store rabbit hole and ended up downloading a space scene. Then I added two objects: the Golden Record itself, made out of a cylinder, and a quad with the instructions for how to read the record. I made the spaceship in the scene the parent of both objects so they could move together.
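The proximity idea (hearing a greeting only when you get near an object) boils down to a distance check each frame. A minimal sketch of that logic in plain Python, with names of my own choosing, independent of Unity:

```python
import math

TRIGGER_RADIUS = 3.0  # assumed trigger distance in scene units

def should_play_greeting(player_pos, object_pos, radius=TRIGGER_RADIUS):
    """Play the greeting only when the player is within `radius` of the object."""
    return math.dist(player_pos, object_pos) <= radius

print(should_play_greeting((0, 0, 0), (1, 2, 2)))  # distance 3.0 -> True
print(should_play_greeting((0, 0, 0), (5, 0, 0)))  # distance 5.0 -> False
```

In Unity the same check would live in a script's `Update` loop (or use a trigger collider), comparing the camera's position against each greeting object's position.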

I also tried this week to see "CLOUDS" on the Oculus. Since I had never shown anything on the Oculus from my computer, I started with the new Rift version we have in the ER, but had some trouble setting it up on our PCs. After a few long tryouts, I went back and got the DK2 version, and it worked really fast. Although the experience can be seen on a computer, it was nice to see it in the headset. I'm going to continue with the movie next week. For now I can say that I enjoy it, but I'm not really sure I'm following the right structure of the movie. It's made of a lot of connections between scenes, interviews, and keywords, which is quite confusing, but I probably just need to spend more time with it to understand how to navigate it better.

Researching biometric data raises a big challenge. Biometric data collection raises privacy concerns about the ultimate use of this information: not only is it information that you carry all the time, it's information that you can't reset. Once this data finds its way into government or other databases, it's there and can't be deleted.

The Problem

Losing your anonymity.

The Audience

For this project in general, it's everyone who can be seen by public surveillance cameras.

This project is a collaboration with Rebecca Ricks, who was also interested in this subject, and we came up with 4 design ideas.

1. A physical artifact of data: a physical installation that gives the user personalized information based on a biometric input.

4. Sell biometric data on eBay: collecting pieces of personal data from participants and selling them on eBay, in order to gauge the monetary value of biometric data.

The kit idea is something we both would like to continue with. After our research and talks so far, we thought about a few options:

Personas

We wanted to define specific events where our product could be used:

In the research phase we also contacted Professor Nasir Memon, a professor of Computer Science and Engineering at NYU Tandon. His research interests include digital forensics, biometrics, data compression, network security, and security and human behavior. Prof. Memon was thrilled to hear that someone shared his interest in trying to interfere with biometric data collection, and we met 3 of his student researchers: Philip J. Bontrager, Kevin Gallagher, and Thanos Papadopoulos. We shared our research with them and they shared some of the ideas they had developed with their professor. They had a lot of interesting ideas, and it seems that our goal of preventing data collection is ambitious: this technology keeps getting better and better, and trying to defeat all methods could be a bigger task than we thought. They suggested we try and test one of the ideas, the surveillance-camera hat, which involves IR lights that can prevent your face from being identified by certain cameras. We're still in contact with Adam Harvey, who seems to be very busy.

We also talked to Lior Ben Kereth, one of the founders of face.com, an Israeli startup dealing with face recognition algorithms. The startup was bought by Facebook, and now they work at the HQ in San Francisco. In our conversation Lior explained that face recognition uses classic machine learning methods: first classifying what a face is, then detecting it, and then recognizing it. A few methods Lior thought could trick the algorithm are a t-shirt printed with faces, which can distract the recognition, and contrast via makeup. Lior also mentioned that in Europe and Canada you aren't automatically held in the biometric database; you need to ask to join it. He explained how Facebook's face recognition works: the algorithm takes pictures you've been tagged in, then maps them onto a 3D face, in order to reconstruct your face from any direction. Every new tag is added to your database, and that way it can keep learning from pictures at different moods and different ages. Lior also mentioned, not about Facebook but in general, the use of mood detection while watching films; that way data about you watching the movie, like when you laugh, can be collected. Lior says Facebook treats what it collects not as biometric data but as social data: companies like Facebook need their audience's trust in order to exist, and they wouldn't harm it.

We also met with Eric Rosenthal, who told us about some of his research and experiments on surveillance use in the city. He suggested that we specify the kinds of machines we aim to disrupt data collection from; that way we can be more focused in finding a solution.

Our next step is to prototype with IR (infrared) lights and see how they affect the cameras we have here at ITP, and next, cameras outside.

The Data Art assignment is going to be spread over 4 sections, each spanning 3 weeks. Our first assignment was to create an aesthetic piece from a data set, one that we would be happy to have in our room.

I started looking for data sets in the "Data Is Plural" newsletter, which collects great data sets from different sources. I found some interesting stuff, like a collection of all the nuclear bombings in the world, and I was also interested in the LIDAR data published for a few cities. It's height information captured by a low-flying plane photographing the area underneath it; from the data those pictures collect you can create 3D models. It just seemed like a way to create something in 3D. Then I found a data set that I thought would be great to CNC into a 3D visualization: Antarctic and Arctic ice extent.
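As a rough illustration of turning height data into 3D geometry (a sketch over a made-up grid, not the actual LIDAR pipeline or the tooling I used), a height grid can be converted into mesh vertices and triangle faces:

```python
def heightmap_to_mesh(grid):
    """Turn a 2D grid of heights into (x, y, z) vertices and triangle faces."""
    rows, cols = len(grid), len(grid[0])
    vertices = [(x, y, grid[y][x]) for y in range(rows) for x in range(cols)]
    faces = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x  # top-left vertex index of this grid cell
            faces.append((i, i + 1, i + cols))             # upper triangle
            faces.append((i + 1, i + cols + 1, i + cols))  # lower triangle
    return vertices, faces

# Tiny fake 3x3 height grid for illustration
verts, faces = heightmap_to_mesh([[0, 1, 0], [1, 2, 1], [0, 1, 0]])
print(len(verts), len(faces))  # 9 vertices, 8 triangles
```

A mesh like this could then be exported (e.g. as STL) and handed to CAM software for the CNC.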

I found a source that collects satellite pictures from each year, and I chose Antarctica, just because of its island shape, which I could work with. I decided to model the earliest and the latest data: 1979 and 2016, 37 years of ice extent.

The information is available on the National Snow and Ice Data Center (NSIDC) website as an FTP directory.

From the images I created smoother vectors, and made one land model and two ice extent models.

The feel of the two objects turned out different; the tangibility of the shapes in the wood lets you feel the differences between those two periods of time.

For the final subtraction project, I decided to combine it with a class I'm in, "Programming Design Systems" with Rune Madsen. I thought it would be cool to create a system that can help cut shapes on the CNC. A custom solution can be good for creating original pieces, and maybe easier than using software like Illustrator; essentially all you need for a CNC cut is a vector, which I can program the code to produce as the final output.

I started exploring both materials and ways to control a system in code. The code leans heavily on an earlier project, where I made a Roy Lichtenstein randomizer that recreates his work "Explosion" (1965-6). On the bottom is the original; on the top is the code's creation, which randomizes the sizes of all the shapes. The code is based on sine and cosine elements.

By using the code to control dots around a circle, we can change the distance between dots, the number of dots in a circle, and the curves. You can see the final result of the code here. The shapes can be designed and then saved as an SVG.
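A minimal sketch of that idea, placing dots around a circle with sine and cosine and writing them out as an SVG (the parameter names here are my own, not the original sketch's):

```python
import math

def circle_dots_svg(n_dots, radius, dot_r=3, size=200):
    """Place n_dots evenly around a circle with sin/cos and emit a tiny SVG."""
    cx = cy = size / 2
    circles = []
    for i in range(n_dots):
        angle = 2 * math.pi * i / n_dots
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        circles.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{dot_r}"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">' + "".join(circles) + "</svg>")

svg = circle_dots_svg(12, 80)
print(svg.count("<circle"))  # 12 dots
```

Varying `n_dots`, `radius`, and the angle step is what gives the family of shapes, and the SVG string can be saved straight to a file and imported into the CAM toolchain.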

I started by testing the materials that could be used for the pieces, thinking that having more than one material might add variety to the final pieces. Plywood was great to test regarding the width of the material, but the plies sometimes chipped or were missing inside the piece of wood. I also tried maple, 1″ thick, which came out pretty solid, and pine, which gave good results but felt really light. I also tested aluminum on the 4-axis machine, which didn't give good results. I decided to continue with the maple.

I started cutting shapes and combining them into different creatures, in order to later assign shapes to animals. Because it's a creation of animals using code, I called the project "CoSin Zoo".

I decided to add the simple shape attributes of specific animals to help form them. I made 3 animals for the final piece: a deer, a sheep, and a blowfish, or as some people say, a porcupine.

Instead of using the male-female way of clicking the pieces together, I made a hole that lets you slide the pieces into each other, also as a feature for including more materials in the piece.

After all the pieces were ready, I used the wax wheel to finish the wood.

To get a good result on the 4-axis mill, I thought I'd try a round shape: something that is pretty difficult to do by hand, not to mention the details and refining. Because pretty much all my projects on this machine turned out well only when I used the rounded 1/4″ bit, I decided to create a real moon model.

At this point the machine stopped and reset the project; I probably didn't have an even piece, and it assumed the stock was much lower than it actually was. I started a new piece and tried to be more accurate:

Finally, because my first attempt ate into my time on the machine, I had to stop it. So I ended up with half a moon. It looked like the machine was going in the right direction, and with more time I could have sanded the tabs and made it a full sphere.