Monday, November 26, 2018

Like many of you I woke up this morning to an email inbox full of leftover Black Friday ads, a whole bunch of Cyber Monday ads, and the Xth day in a row of #GivingTuesday announcements.

Among those was the first clearly-designed-to-misinform #GivingTuesday astroturf email that I've received.

It came from the Center for Consumer Freedom (CCF) - a nonprofit front group run by a lobbyist for a variety of industries including restaurants, alcohol, and tobacco. The umbrella group for CCF - the Center for Organizational Research and Education (CORE) - is also home to HumaneWatch. According to the 2016 990 tax filing for CORE, HumaneWatch exists to "educate the public about the Humane Society of the United States (HSUS), its misleading fundraising practices, its dismal track record of supporting pet shelters and its support of a radical animal rights agenda."

(clip from 2016 990 for CORE)

The email I received from CCF linked to a YouTube "ad." But all of it - the Consumer Freedom website, the email I received, the work of these nonprofits - leads back to a commercial PR firm, Berman and Co., which has been accused of setting up these groups as part of its paid work for industry. None of this was disclosed in the email - and if you look at the CCF website to find out who funds it, you find this statement:

"The Center for Consumer Freedom is supported by restaurants, food companies and thousands of individual consumers. From farm to fork, from urban to rural, our friends and supporters include businesses, their employees, and their customers. The Center is a nonprofit 501(c)(3) organization. We file regular statements with the Internal Revenue Service, which are open to public inspection. Many of the companies and individuals who support the Center financially have indicated that they want anonymity as contributors. They are reasonably apprehensive about privacy and safety in light of the violence and other forms of aggression some activists have adopted as a “game plan” to impose their views, so we respect their wishes."

If you check the CCF's 990 form (search under CORE), you'll find that on revenue of $4.5 million (sources undisclosed), the largest expense was $1.5 million paid to Berman and Co. for management fees. The next largest expense is $1.4 million spent on advertising and promotion.

There's no virtue in this circle - just paid lobbyists setting up nonprofit groups to counter the messages of other nonprofit groups. On the one hand, the nonprofit sector must be doing something right when the tobacco and alcoholic beverage industries are trying to shut them up. On the other hand, good luck to you - average donor - trying to figure out what's real and what's not. Even the watchdog groups are sniping at each other.

I've written before about misinformation, the current ecosystem of distrust, and civil society. And here it is. Be careful out there.

Saturday, November 17, 2018

Depending on a commercial company for our giving infrastructure is problematic in several ways. First, at any point in time the company (and this company has done so repeatedly) can change its commitments, algorithms, or priorities, and leave everyone who was using it without recourse. Second, we have no way of knowing that the company's algorithms are offering all the choices to all the people. How would you even know if your nonprofit or fundraising campaign wasn't being shown to those you were trying to reach? Third, Facebook owns this data and can tell us whatever it wants about it. Maybe $1 billion was given, maybe it was more, maybe it was less - how would we know?

There's an existing infrastructure for measuring giving in the U.S. and a number of research centers that analyze and report on those trends every year. That infrastructure - from 990 tax forms to The Foundation Center, Guidestar, the Urban Institute, and independent research from the Giving Institute or the Lilly Family School of Philanthropy at Indiana University - was built for the purpose of public accountability, to protect the democratic values of free association and expression, and for industry-wide performance improvement. This infrastructure is not perfect. But the data these organizations use and their analytic methods can be checked by others - they can be replicated and verified following the basic tenets of sound scientific practice and good evidence practices for policymaking.

There need to be new ways to understand what's happening on these proprietary platforms - especially if Facebook is moving $1 billion and GoFundMe $5 billion. Those are big numbers for our nonprofit sector. We need to be able to interpret these data, not just reflexively believe what the companies announce.

Friday, November 16, 2018

I've had countless conversations with well-intended people from a number of social sectors and academic disciplines who are working on digital innovations that they firmly believe can be used to address shared social challenges. Some of these approaches - such as ways to use aggregated public data - are big investments in unproven hypotheses, namely that making use of these data resources will improve public service delivery.

When I ask these folks for evidence to support their hypothesis, they look at me funny. I get it: their underlying hypothesis that better use of information will lead to better outcomes seems so straightforward, why would anyone ask for evidence? In fact, this assumption is so widespread that we're not only not questioning it, we're ignoring countervailing evidence.

Because there is plenty of evidence that algorithmically driven policies and enterprise innovations are exacerbating social harms such as discrimination and inequity. We are surrounded by evidence of the harms that automated decision-making tools exacerbate - from the ways social media outlets are being used to the application of predictive technologies in policing and education. Policy innovators, software coders, and data collectors need to assume that any automated tool applied to an already unjust system will exacerbate the injustices, not magically overcome these systemic problems.

We need to flip our assumptions about applying data and digital analysis to social problems. There's no excuse for continuing to act as if inserting software into a broken system will fix the system; it's more likely to break it even further.

Rather than assume algorithms will produce better outcomes and hope they don't accelerate discrimination, we should assume they will be discriminatory and inequitable UNLESS designed specifically to redress these issues. This means different software code, different data sets, and simultaneous attention to structures for redress, remediation, and revision. Then, and only then, should we implement and evaluate whether the algorithmic approach can help improve whatever service area it's designed for (housing costs, educational outcomes, environmental justice, transportation access, etc.).
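One concrete way to build this "assume bias first" stance into a project is to gate deployment on a disparate-impact check. The sketch below is a minimal illustration, not a complete fairness audit: the group labels, the data shape, and the 80% threshold (the "four-fifths rule" heuristic from U.S. employment-selection guidelines) are assumptions chosen for the example, and any real system would need the redress and remediation structures described above as well.

```python
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group, selected) pairs, where selected is True/False.
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    # Disparate-impact heuristic: every group's selection rate must be
    # at least 80% of the highest group's rate.
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Hypothetical decisions from some automated screening tool:
# group A is selected 2/3 of the time, group B only 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
assert not passes_four_fifths(decisions)  # fails the check; don't deploy
```

Passing a check like this is a floor, not a ceiling; it only catches one narrow kind of inequity, which is exactly why the surrounding organizational processes matter.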

In other words, every innovation for public (all?) services should be designed for the real world - one in which power dynamics, prejudices, and inequities are part of the system into which the algorithms will be introduced. This assumption should inform how the software itself is written (with measures in place to check for and remediate biases and their amplification) as well as the structural guardrails surrounding the data and software. By this I mean implementing new organizational processes to monitor the discriminatory and harmful ways the software is working, and implementing systems for revision, remediation, and redress. If these social and organizational structures can't be built, then the technological innovation shouldn't be used - if it exacerbates inequity, it's not a social improvement.

Better design of our software for social problems involves factoring in the existing systemic and structural biases and directly seeking to redress them, rather than assuming that an analytic toolset on its own will produce more just outcomes. There is no "clean room" for social innovation - it takes place in the inequitable, unfair, discriminatory world of real people. No algorithm, machine learning application, or policy innovation on its own will counter that system, and it's past time to stop pretending they will. It's time to stop being sorry for or surprised by the ways our digital, data-driven tools aren't solving social problems, and start designing them in such a way that they stand a chance.

Wednesday, November 14, 2018

You know the old trope about people looking on the ground around a streetlamp for their lost keys, even if they lost them down the block, simply because "that's where the light is"?

(http://creepypasta.wikia.com/wiki/The_Man_Under_the_Street_Light)

This is a little like how I've been thinking about generosity. We associate generosity - especially in the U.S. - with charitable giving to nonprofits. Everything else - volunteering time, giving to politics, direct gifts to neighbors or friends or others, mutual aid, remittances, shopping your values, investing your values - those things are all something else.

And, yes, the motivational and behavioral mix for these actions may be different. But we make a mistake when we center the one - charitable giving - and shift everything else to the edge and think that's based in human behavior. It's actually based in politics and industry.

In the U.S. we've built an infrastructure of organizations (nonprofits) that take up a lot of space in the generosity mix. And we make them register with the government, which allows us to count them. And we require them to report certain actions, which then allows us to track giving to them. Those decisions were political - and have to do with values like accountability and association and expression.

On top of those registries and reports we've built big systems and organizations to make sense of the information. Some of those institutions (Foundation Center) were built as an industry response to possible regulation. Some of those institutions (Guidestar) were built because there was a huge data set of nonprofit tax forms that existed by the 1990s. These data sets served as the "lights" that helped us "see" specific behaviors. It wasn't that other behaviors weren't happening, it's just that there weren't lights shining on them.

Shining a light on these behaviors was done to better understand this one type of generous act - it wasn't done with the intention of judging the others as lesser. But over time, all the light has focused on charitable giving to nonprofits making it seem like the other behaviors weren't happening or were less important, just because the light was not shining there.

The more the full mix of behaviors happens on digital platforms, the more lights get turned on. Where it is hard to track a gift of cash to a neighbor in need, crowdfunding platforms that facilitate such exchanges (and credit card trails) bring light onto those actions. And because more and more acts take place on digital platforms - Facebook claims to have moved $1 billion in the last year - we can now see them better. The digital trails are like shining new lights on old behaviors.

Think of it like a house of generosity. In one room are donations to charitable nonprofits. In the USA, the lights have been burning bright in this room for decades. In another room are contributions to houses of worship. Down the hall is the room of money to neighbors/friends in need. Another room is where shopping for some products and not others happens. Downstairs is investing in line with your values. There's a room for political funding and one for spending time rallying around a cause. Other rooms hold remittances or cooperative funds or mutual aid pools. As each of these behaviors shifts to use digital platforms - be it online portals, social media, texting, or even just credit card payments - it's like turning on the light in those rooms. We can "see" the behaviors better, not because they're new but because the digital trails they create are now visible - the light is shining in all those rooms.

Digital trails shine lights on lots of different behaviors. We can see things we couldn't see before. It's going to be increasingly important that we have public access to data on what's going on in the whole house, not just certain rooms. Right now, the data on many of these behaviors is held in closed fashion by the platforms on which the transactions happen - crowdfunding platforms know what happens on them, Facebook tells us what happens there, and so on. We're dependent on the holder of the light to shine it into certain rooms. This isn't in the public's interest. Having the lights turned on is better than being in the dark, but having public access to the light switches is what really matters.

Sunday, October 21, 2018

I've been talking to a lot of nonprofit and foundation folks + software developers lately. The good news is these two communities are starting to work together - from the beginning. But there is a long way to go. Just because you're working in or with a nonprofit/social sector/civil society organization doesn't mean unleashing the most sophisticated software/data analytic techniques is a good thing. In fact, using cutting edge algorithmic or analytic techniques that haven't been tried before in an effort to help already vulnerable people is quite possibly a really bad idea.

I've come to believe that the first question that these teams of well meaning people should ask about whatever it is they're about to build is:

"How will this thing be used against its intended purpose?"

How will it be broken, hacked, manipulated, used to derail the good intention it was designed for? If the software is being designed to keep some people safe, how will those trying to do harm use it? If it's intended to protect privacy, how will it be used to expose or train attention in another dangerous way?

Think about it this way - every vulnerable community is vulnerable because some other set of communities and structures is making it that way. Your software probably doesn't (can't) address those oppressive or exploitative actors' motives or resources. So when you deploy it, it will be used in the continuing context of intentional or secondary harms.

If you can't figure out the ecosystem of safety belts and air bags, traffic rules, insurance companies, drivers' education, and regulatory systems that need to help make sure that whatever you build does more help than harm, ask yourself - are we ready for this? Because things will go wrong. And the best tool in the wrong hands makes things worse, not better.

Friday, October 12, 2018

A lot of work on responsible data practices in nonprofits has focused on staff skills to manage digital resources. This is great. Progress is being made.

Digital resources (data and infrastructure) are core parts of organizational capacity. We need to help board members understand and govern these resources in line with mission and in safe, ethical and responsible ways.

Digital data and infrastructure need to become part of the regular purview of boards in thinking about liabilities and line items.

Wednesday, October 03, 2018

No one reads the Terms of Service. Few of us understand who has access to the data we generate all day every day. Rachel Maddow and others continue to refer to Cambridge Analytica/Facebook as the former "stealing" data from the latter, when actually, the latter's business model depended on the former doing what it did.

Our (us as people and civil society) relationship with the companies that make our phones, sell us internet access and data plans, "give" us apps, social media feeds and "free" cloud storage is a mess. Part of the problem is the metaphors. So here's a new one. Don't think of the software, internet, cloud, app, and hardware companies whose products you use as vendors; think of them as landlords.

Then think about how you read your lease. How you ask for better terms and negotiate for buildouts or rebates. And how, if they told you they'd be coming in and rummaging around in your file cabinets at any time of day or night, taking what they wanted, claiming it as their own, using it to sell to other renters, and even selling it - you'd run.

People are beginning to recognize the creepy landlord relationship they have with their tech vendors. Nonprofit organizations and foundations who depend on Facebook and/or its APIs, Salesforce and its Philanthropy Cloud, Google docs or hangouts - they're your landlord. You're running your programs and operations in their space. By their rules. You wouldn't stand for it in physical space - why do so in digital space?

Civil society exists in and depends on digital data, gadgets, and infrastructure. We rely on the norms and policies that shape the digital environment, are able to use digital tools to advance our goals, and are subject to the manipulation and sabotage of digital spaces.

The "Russians hacking boy scouts" is a great headline to make the point. Anyone with a desire to manipulate opinions - which includes advertisers, hackers, politicians, extremists, ideologues and all kinds of others - knows that our digital dependencies make it easier than ever to do so through supposedly trustworthy institutions, like nonprofits and "news" sites. In a time of information warfare everyone and every institution operating in the digital space is potentially on the battlefield - intentionally or unwittingly.

Practical, immediate meaning of this for every nonprofit? Your digital presence - website, communication on social media, outreach emails, everything - exists in an information ecosystem that is being deliberately polluted with misinformation all day, every day, on every issue. If your communications strategy still assumes that "hey, they'll trust us - we're a nonprofit" or "hey, this is what the data say," then I recommend you reconsider what you say, how you say it, how you protect what you say, and your expectations about and responses to how what you say gets heard and used.

You may well be speaking truth. However, the digital "room" you are speaking it into is one filled with deliberate, diffuse distractions and detractors (at the very least). It's like trying to show someone a clear picture in a room full of fun house mirrors. It's time civil society started "assuming" (as in taking as a starting point) that the digital environment in which it exists is one of distortion and distrust and start building effective, trusted, and meaningful strategies from there (instead of being surprised each time things go wrong).

Thursday, August 30, 2018

"Flexible Giving Accounts: A new bipartisan bill (H.R. 6616) has been introduced to allow employers to create flexible giving accounts, enabling employees to make pre-tax payroll deductions of up to $5,000 per year into an account through their employer and designate the nonprofits to receive the funds. Employers would be able to establish and administer the accounts as part of a cafeteria plan as a fringe benefit to attract and retain employees. While seen as promoting charitable giving, the legislation raises questions about whether employee confidentiality can be protected, whether employee giving options would be expanded or limited by employer preferences, and whether administrative fees will eat into the donations along the lines seen this year in the Combined Federal Campaign."

Hmmm. What's this all about? And who's behind it? And who stands to gain? I haven't had time to do much digging but it's on my radar.

Here's the Bill. Here's some PR on it from Representative Paulsen of Minnesota, who introduced it. Here's an interview with Dan Rashke, whose foundation supports a 501(c)(6) organization, The Greater Give, that is promoting the bill. Rashke is the CEO of TASC, a company that sells software to companies to manage their workplace giving campaigns.

The idea (as I understand it so far) - let workers at companies (that participate in the Combined Federal Campaign? United Way? Any workplace giving?) designate up to $5,000 in pre-tax funds to campaign-selected nonprofits. The employee contributes to their account over time, and the funds are then paid out on a pre-determined schedule.

The Bill's supporters claim the goal is to increase participation in workplace giving campaigns and counter the potential decrease in giving predicted as a result of the Tax Reform Act's changes to the standard deduction.

I don't know the answers to these questions, but here's my concern - are FGAs the new donor-advised funds (DAFs)? If so, will they become a "monster" like DAFs have? There seems to be ample opportunity in the idea as I see it for two beneficiaries - vendors of workplace giving software (like TASC) and money managers. The promised benefit to "everyday philanthropists" (their language, not mine) and communities is... that more people will participate through workplace giving campaigns because of the pre-tax deduction built into the FGA. That's a long-term aspiration built on assumptions about tax incentives and giving, promised on the back of short-term money-making opportunities for existing software vendors and (maybe?) money managers.

Tuesday, August 21, 2018

Like people in general, civil society organizations are easily distracted by shiny objects. Although I pinged blockchain as a buzzword a few years ago, the frenzied hype of last year's bitcoin boom and bust (probably manipulated) finally brought the idea through to the general public.

There probably are good social sector uses of systems that permanently store information and make transactions verifiable by distributed participants. And there's lots of experimentation going on. My rule of thumb? Experiments with commodities on supply chains strike me as a safer place to start. Experiments with information about people strike me as disasters in the making.

Here's why:

The ethics of permanently storing information about people are treacherous,

The legal frameworks for information about people are dynamic and diverse,

The governance choices that shape different blockchains are poorly understood, and

The technology itself? C'mon. Are we gonna fall for this again? "Trust me, this software is secure and safe." Remember what the blockchain is - software + rules made by people.

When it comes to the blockchain there is no "one thing," there are many, and they operate under all kinds of rules. Blockchain = software + rules made by people. If you don't know how the software works, what it does and doesn't do, and if you don't understand the governing rules, don't go using the stuff on humans (my rule of thumb). Blockchains promote encryption and permanence. It's scary how often people think they must be the right solution when the problem they're facing is bad database design, missing data, broken incentives, greedy partners, or oppressive governments.
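Since a blockchain is just software plus rules made by people, the core mechanism behind "permanence" fits in a few lines. This is a toy sketch (my own names, no real blockchain library, no consensus or networking): each block stores the hash of the previous one, so quietly editing an old record breaks every later link. It also shows why permanence cuts both ways - a mistaken entry about a person is just as hard to correct as a fraudulent one.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents via a deterministic JSON serialization.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    # The chain is intact only if every stored prev_hash still matches.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "shipment 1 received")   # a supply-chain-style record
add_block(chain, "shipment 2 received")
assert verify(chain)

# Tampering with an early record invalidates everything after it.
chain[0]["data"] = "shipment 1 lost"
assert not verify(chain)
```

Note what the sketch does not give you: the tampered chain still exists, and nothing here decides who may write, who resolves disputes, or what happens when a record about a person must change. Those are the governance rules, and they're made by people.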

We do need to be talking with experts from all sectors about if and when and how to use new types of software and governance structures. Software experts can explain what different systems can and cannot do. Sector experts know the human and political problem and whether the challenge is one of security, verifiability, corruption, access, or any permutation of these and other conditions. Governance experts know how rules get made, dodged, and broken, and can advise on designing just systems that can be held accountable. The people whose data might be collected will know their concerns, now and in the next generations (remember, permanence means permanence).

Here are some useful resources from those doing good work on these questions:

Thursday, August 16, 2018

It's August 16, 2018 (forty-seven years to the day since President Nixon's administration admitted to keeping an "enemies list" of the American people).

Today, newspapers across the United States are running editorials on the importance of a free press and declaring their outrage at the way the current U.S. President treats and talks about the "media."

I've written before about how the last two decades in the "news business" might hold insights to the current and short-term future of civil society writ large. My previous comments have focused on the effects of the transformation to digital distribution (broke print journalism's ad revenue model), regulatory changes (which facilitated the creation of "news" monopolies in print and broadcast), and the entry of new players with different credentials (blurred the understanding of independent, credible news and opinions/propaganda).

It's time to add to this list, and learn from the collective voices being published today by more than 300 papers across the country. The list of changes and threats to journalism now most notably includes direct attacks from the White House itself. But we also need to pay attention to the "successful" efforts of recent years to sow doubt and confusion in the information environment, so that even the most careful, vetted, confirmed reporting now exists in a miasma of distrust and deliberate doubt.

Both of these changes - direct threats from the federal government and an environment of mistrust, distrust, and lies - should be added to the list of realities that face civil society in the United States. (May apply elsewhere as well, but I'm thinking U.S. at the moment)

The environment of mistrust? Oh, come on. You've been paying attention - distrust of the news media is part and parcel of a corrosion of online communications. Once doubt takes over, it takes over everything. We are all communicating into and via an atmosphere where doubt rules. No association or organization should assume that its communications won't be manipulated or labeled false by opponents or political by platforms, that facts alone win, or even that its allies are who they say they are.

There was also YouTube's announcement earlier this year that it would rely on Wikipedia entries to help it deal with conspiracy theories. The company didn't even bother to tell the nonprofit in advance (let alone try to consult with the nonprofit as if it might have a say about this plan). This hasn't worked out that well for either YouTube or Wikipedia. Let's think about this. Wikipedia is run by a nonprofit, but the work is done by a global network of volunteers, who - everyone knows - are by no means representative of the global population. YouTube is part of Alphabet, one of the world's wealthiest companies, and is itself one of the world's biggest social networks. It has its own curatorial teams. And yet, as Wired notes, both Facebook and YouTube are outsourcing their responsibilities to nonprofits.

This seems unseemly even if you just think about it from an economic standpoint - big company relying on unpaid labor? Sounds like exploitation. When you start thinking about it in terms of the health of nonprofits or civil society the exploitation seems even worse.

Just like the open source community has built all kinds of technology that companies rely on, so too are nonprofits providing a kind of critical digital infrastructure in terms of their community voice, commitment to a set of ideals, expertise, and concerns for the vulnerable. Yet the current set of "partnership" arrangements seem destined to throw the nonprofit under the bus - the company saves money, gains reputation, and offloads both costs and liability. The nonprofit gets...what?

Tuesday, April 17, 2018

For those who don't want to click over (and you should) the piece discusses the technological work being done on digital identities - where you would control yours - and its implications for civil society and philanthropy. Go on, read it.

Thursday, April 12, 2018

One of many things that have been made more public during this week's congressional hearings with Mark Zuckerberg is the way in which the platform curates content. Zuckerberg bemoaned the reality that it's his job to decide who sees what when.

For those who study curation and platforms and internet law this is not new. I'm writing this while listening to Tarleton Gillespie discuss his forthcoming book (recommended) Custodians of the Internet. He's describing the rules, technologies, and people that make up the "moderation apparatus" - the systems that determine who sees what information, when, and from whom. Gillespie argues that this moderation is essential to what the platforms do - it is their value proposition. This runs counter to the longstanding mythos of the open web.

One of the elements of this "moderation apparatus" that Gillespie describes that catches my eye is the role of civil society organizations and nonprofits. Big companies, like Facebook but probably not only Facebook, rely on civil society to do their dirty work.

In Myanmar, civil society groups that were working with Facebook to take down hateful and violent postings pushed back when Zuckerberg claimed that the company was doing all it could to address these issues. The civil society groups noted that the company was essentially relying on them to voluntarily moderate the site and wasn't providing them with the engineering resources that were needed to do this. They secured a verbal commitment from Zuckerberg to improve the process.

Here's what this means:

Facebook was shifting its responsibilities to civil society.

Civil society groups aren't equipped for, or paid for, this role.

Civil society groups - by design - are fragmented and contentious. Choosing some of them to do moderation is a value-laden, editorial decision.

Civil society is - from Facebook's perspective in this example - just a low cost, outsourced labor source. It also, no doubt, shifts liability from Facebook to civil society (not least for the human psychological effects of moderating photos and posts about harm and violence).

Here's what I want to know:

How widespread are these kinds of commercial/civil society moderation/curation relationships?

How do they work - who's contracted for what? who's liable for what? what recourse exists when things go wrong?

What do civil society groups think of this? When might it be a good solution, from civil society's perspective?

(editor: Why didn't Leahy also ask Zuckerberg about Facebook's labor exploitation of those groups' volunteers - essentially relying on them as his workforce?)

Special Counsel Robert Mueller and the FBI are investigating the President's attorney for foreign payments to Trump's foundation.

Meg Wolitzer's new novel features a protagonist who works at a foundation. A review of the novel in Bookforum includes this wonderful line:

"...it takes an earnest but compromised nonprofit endeavor as a vehicle for its ideas. With its magical relationship to money, the foundation helps insulate Greer and her beliefs from the world beyond, at least until she must confront the reality of what the suits are doing upstairs"

Saturday, April 07, 2018

Have you noticed an uptick of emails from companies like Slack, Google, or PayPal, announcing new privacy policies and terms and conditions? Why the sudden onslaught of updates? The answer is easy. The companies sending these notices are changing their policies to meet the requirements of the European Union’s General Data Protection Regulation (EU GDPR, or just GDPR), which will put powerful new enforcement mechanisms into place starting on May 25, 2018.

If you’re a U.S. resident, or working at a U.S. nonprofit or foundation, you may wonder what, if anything, the GDPR has to do with you. Good question. There’s no simple answer for everyone outside the EU. But just as those companies (all of which are based in the U.S.) revisit their policies and practices because of the new law, it’s a good idea for you to do so, too.

First, the GDPR probably applies to you, whether you know it or not.
It’s possible – depending on where your clients and donors live, where
your data is stored, or where you provide services – that your
organization is subject to fines for not following the new law. In that
case, compliance is more than just a good idea; it’s required.

Second, the GDPR is a prompt for a worldwide checkup on safe,
ethical, and effective data practices. Many of the GDPR’s provisions
align with the data governance principles and responsible data practices
that we at Digital Impact advocate for in civil society. Think of the
GDPR as providing a framework and set of user-centered guidelines about
data that may just align with your mission.

Many resources and consultancies are popping up to help organizations
comply with the GDPR.

Digital Impact is here to help you navigate
through it. We’re on the lookout for credible, accessible, and
affordable resources with particular resonance to nonprofits,
foundations, and civil society. In the coming months with help from our
community, we’ll be curating new content, holding conversations about
data governance and GDPR, and fostering discussion at digitalimpact.org/gdpr.

Check out our starting list of GDPR resources, send us others that you’ve found, and join the community in conversation. Want to share your view on the GDPR with the world? Become a Digital Impact contributor. And if there are topics, tools, or templates you need but can’t find, let us know. Maybe the Digital Impact community can help.

Sunday, March 25, 2018

Gun violence survivors. The extent and reach of this as part of the identity of millions of people in the U.S. was on heart-wrenching full display on Saturday, March 24. Thousands of people have survived the USA's totemic mass shootings (Columbine, Aurora, Charleston, Virginia Tech, Pulse, Newtown, Las Vegas, Parkland, there are too many to list). Hundreds of thousands, probably millions of Black people and others in poor, urban communities survive daily gun violence, perpetrated by both civilians and law enforcement. People in these communities have been naming the problem, identifying as survivors, and calling out the epidemic for decades. They survive despite the pain, grief, and identity-shaping nature of the experiences. Yesterday, people from across many different communities - bound by a shared identity that none of them chose - took full hold of the attention of the rest of the country, the media, the world.

Part of the NRA's success for so many years has been that gun owners identify as gun owners. It's not just something they care about, it's part of how they see themselves. This is the argument made by Hahrie Han and other scholars. When an issue becomes part of your identity, you act on it - you participate in civic life, you vote, you hold politicians accountable, you show up.

Yesterday was a full-scale display of how broad and big the group that shares this unwanted identity is: the breadth, depth, and multi-generational nature of people who understand themselves as gun violence survivors.

Now that we've finally heard it and seen how many we are, perhaps this shared identity will contribute to civic action of a scale and persistence to match.

Tuesday, March 13, 2018

Thanks to The Engine Room and the Ford Foundation - this report clearly shows how the digital ecosystem is now core to civil society, what expertise is needed, and the emerging infrastructure to support digital civil society. Read it now.

The logic, theory, and experiences that connect an open civil
society with a stable majority-run democracy are well known. Civil society is
meant to be a third space where we voluntarily come together to take action as
private citizens for the public good. Majority-run democracies need to, at the
very least, prevent those who disagree with them (minorities) from revolting
against the system. Civil society provides, at the very least, the
pressure-release valve for majority-run governments. Positioned more
positively, civil society is where those without power or critical mass can
build both and influence the majority. It serves as a conduit to the majority
system and a counterbalance to extreme positions. It also serves as an outlet
for those actions, rights, and views that may never be the priority of a
majority, but that are still valid, just, or beautiful. When it exists, civil
society offers an immune system for democracy—it is a critical factor in a
healthy system, and it requires its own maintenance. Immune systems exist to
protect and define—they are lines of defense that “allow organism[s] to persist
over time.”

Civil society always struggles to define its independence from
governments and markets. Civil society is shaped by laws and revenue streams,
but has different accountability mechanisms and relies on voluntary
participation. It is distinct from compulsory government rights and
obligations, and can often operate in ways that aren’t about financial profit.
But to describe the resulting space as truly independent is aspirational at
best. While universal human rights such as free expression, peaceable assembly,
and privacy provide its moral and philosophical underpinnings, civil society is
shaped by the laws of the country in question. These include regulations about
allowable sources of financing, public reporting, governance structures, and
defined spheres of activity. At the very least, the boundaries of civil society
in modern democracies are set by government action.

We are surrounded by big, fragile institutions. Global
companies, established political structures, and big nonprofits have purchased,
suppressed, or ignored the fluid and small alternatives surrounding them.
Fluid, networked alternatives exist and will continue to spawn. For some time
now, the fate of these alternatives has been absorption by the top or diffusion with
limited impact. In each sector, there appears to be a notable change of
attitude in the way the small views the big. While corporate near-monopolies
and dominant political parties are still viewed by some as the natural and best
order of things (see, for example, tech executives and incumbent politicians),
the big players in each sector are rigidifying. I sense that this is matched by
a new attitude from the emergent, smaller, and more fluid groups who aspire to
challenge rather than to buttress.

This is where reminding ourselves of the dynamism of a social
economy within civil society is so important. It helps us to keep our eyes
simultaneously on emerging forms and on the relationships between them (the
nodes and the networks). It’s where we see tech-driven alternatives to party
politics, nonprofit or research-driven alternatives to corporate data
monopolies, and the crowdfunding of public services. What’s changed is not the level of dynamism among these
small, fluid, and cross-sector strategies. What’s new is the confrontational
nature they now bring. These alternatives don’t see themselves as mere fleas on
an elephant; rather, they challenge themselves to be the termites that topple
the houses.

The sense of failed systems can be seen in the rise of
autocrats where democracy once ruled, in the lived experience of a changed
climate even as a few powerful holdouts cling to their self-interested denials,
and in the return to prominence of racist or nationalist factions where they’d
been marginalized before. Threats about nuclear warheads catch people’s
attention. There is a pervasive sense of uncertainty.

Democracies depend on civil society. Closing civil society
often precedes a democracy’s shift into autocracy or chaos. Defending civil
society is not just an act of self-preservation. Protecting the rights and
interests of minority groups, and allowing space for collective action and
diverse beliefs, a cacophony of independent voices, and activities that yield
neither financial profit nor direct political power, are in the best interest of
elected political leaders and businesspeople.

The
language of the social economy helps us describe a
diverse system of institutions and financial flows. The language of civil
society helps us articulate the purpose of the social economy and its role in
democratic systems. Digital civil society encompasses all the ways we
voluntarily use private resources for public benefit in the digital age.

The
hallmark feature of civil society in a democracy is its (at least,
theoretical) independence from governments and markets. Civil society is
meant to be a “third space” where we voluntarily come together on the
proverbial (or literal) park bench to take action as private citizens
for the public good. Our use of digital data and infrastructure blurs
these distinctions and complicates these relationships for a simple
reason: Most of “digital space” is owned or monitored by commercial
firms and government.

The conditions that support civil society’s independence have
been weakening for a long time and for many reasons. Support for research from
conflicted interests has tainted universities and nominally independent
research centers for years. News organizations sustaining themselves via ad and
subscription revenue are mostly a thing of the past. A small number of big donors
have been shown to shape political campaigns, legislative and legal strategies,
and the charitable nonprofit landscape. While crowdfunding and crowdsourcing get a
lot of press attention, the other end of the scale is shaped by large
concentrations of money from a few interests.

Today we must attempt to understand both the analog and
digital relationships between these actors. We must examine how these
relationships shift when organizations and individuals become dependent on
digital tools, data, and infrastructure. These dependencies do much more than
accelerate and expand the reach of individuals and organizations. They
introduce new forms of activism such as hacking and raise new questions about
authority and control between individuals and the companies that run the
digital platforms.
Most important, these dependencies bind traditionally independent civil society
organizations and activities closely to marketplaces and governments in complex
and problematic ways.

Our daily use of the
most basic tools of the digital age, such as cellular phones, email, and
networked printers, means that our activities are bounded by and reliant on the
rules and tools of the companies that make the gadgets and wire the world. As
we use these tools, our activities are also monitored by the governments that
surveil the digital spaces in which our tools operate. Our actions in this
space are shaped by the values of the companies that make the tools (even as
the companies seek to deny this) and by the way we respond to being watched by
both corporations and governments.

These digital dependencies significantly
challenge civil society’s independence. This matters to how individuals and
organizations work within the sector. And it matters to democracies that have
long relied on the “immune response” provided by a diverse and fractious space
where minority demands, rights, and ideas could thrive with some degree of
independence.

It is no coincidence that experts see signs that the space for
civil society is closing, that those monitoring Internet freedom see rising
threats, and that those monitoring the health of democracies fear for the
future. We can’t decouple these pieces. Efforts to “save democracy” will depend
on understanding how digital technologies have changed the relationships
between sectors. I discuss this in more depth in the section on digital
dependencies.

About me

Why is this blog called Philanthropy 2173?

This is a blog about the future. The year 2173 seems sufficiently far in the future to give us some perspective. As sure as we are of ourselves now, talking about the future - and making philanthropic investments - requires that we keep a sense of modesty and humor about what we are doing. Philanthropy is for the long-term - for the year 2173.