Archive

BJ Copeland states that a strong AI machine would: one, be built in the form of a man; two, have the same sensory perception as a human; and three, go through the same education and learning processes as a human child. With these three attributes, mirroring human development, the mind of the machine would be born as a child and eventually mature into an adult.

The big concerns that I have about artificial intelligence are really not about the Singularity, which frankly computer scientists say is…if it’s possible at all, it’s hundreds of years away. I’m actually much more interested in the effects that we are seeing of AI now.

I’m interested in data and discrimination, in the things that have come to make us uniquely who we are: how we look, where we are from, our personal and demographic identities, what languages we speak. These things are effectively incomprehensible to machines. What is generally celebrated as human diversity and experience is transformed by machine reading into something absurd, something that marks us as different.

One of the most important insights that I’ve gotten in working with biologists and ecologists is that today it’s actually not really known on a scientific basis how well different conservation interventions will work. And it’s because we just don’t have a lot of data.

The question is what are we doing in the industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and it’s the good news that I wanted to focus on in my talk today.

Computers can tell stories, but they’re always stories that humans have input into a computer, which are then just being regurgitated. They don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.

We have increasingly smart, surveillant persuasion architectures. Architectures aimed at persuading us to do something. At the moment it’s clicking on an ad. And that seems like a waste. We’re just clicking on an ad. You know. It’s kind of a waste of our energy. But increasingly it is going to be persuading us to support something, to think of something, to imagine something.

Machine learning systems that we have today have become so powerful and are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to producing the news feed on Facebook that decides what you ought to see. And they have a lot of societal impacts. But they’re very difficult to audit.

Quite often when we’re asking these difficult questions, we’re asking about questions where we might not even know how to ask where the line is. But in other cases, when researchers work to advance public knowledge, even on uncontroversial topics, we can still find ourselves forbidden from doing the research or disseminating the research.

The smartphone is the ultimate example of a universal computer. Apps transform the phone into different devices. Unfortunately, the computational revolution has done little for the sustainability of our Earth. Yet sustainability problems are unique in scale and complexity, often involving significant computational challenges.