It’s still in an early stage; it’s not yet integrated with the cred from GitHub. It has edges from topics to the posts they contain, from posts and topics to their authors, from posts to the posts they reply to, and edges for likes. However, it doesn’t yet do mention or reference detection.

The cred has some interesting features. @LB scores really highly, especially in the most recent week. This is because LB has posted a few well-liked posts (the art posts, actually; check them out) but hasn’t liked any other posts. The result is that cred gets “stuck” in a loop between them and the posts they’ve authored, producing artificially high cred scores. This is even more pronounced in legacy mode:

As this image shows: in legacy mode, they have the second-highest cred, and the reasoning is circular: they have the most cred because they authored very high-cred topics, and their topics are very high-cred because they were authored by a high-cred user.

I spent a while discussing this issue in person with @mzargham. We have a good lever for influencing this in the alpha parameter, which determines how quickly cred “resets” out of its current trajectory to explore the graph anew. Right now, we use a fixed alpha (0.05 at present) for the whole graph. We suspect we should set alpha on a per-node basis. You can think of this parameter as asking: “how much do we trust this node to direct cred according to its own links, versus having it reset to the general distribution?” Then, for user nodes with very few outbound links, we could set alpha high to prevent cred from getting stuck in tight loops like this one.
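To make the per-node idea concrete, here is a minimal power-iteration sketch, in Python, of PageRank with a per-node alpha. Everything here is hypothetical (the function name, the graph encoding, and the 0.5 value chosen for the high-alpha node); it is not SourceCred’s actual implementation.

```python
# Illustrative sketch of PageRank with a per-node alpha (reset probability).
# Hypothetical names and values; not SourceCred's actual code.

def pagerank_per_node_alpha(out_links, alpha, seed, iters=200):
    """out_links: node -> list of linked nodes; alpha: node -> reset
    probability; seed: node -> teleport distribution (sums to 1)."""
    nodes = list(out_links)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for n in nodes:
            reset_mass = alpha[n] * score[n]         # mass that "resets"
            flow_mass = (1.0 - alpha[n]) * score[n]  # mass that follows links
            for m in nodes:
                nxt[m] += reset_mass * seed[m]
            targets = out_links[n] or nodes  # dangling nodes spread evenly
            for m in targets:
                nxt[m] += flow_mass / len(targets)
        score = nxt
    return score

# A tight user <-> post loop next to an unrelated pair of nodes.
links = {"LB": ["post"], "post": ["LB"], "A": ["B"], "B": ["A"]}
seed = {n: 0.25 for n in links}
uniform = pagerank_per_node_alpha(links, {n: 0.05 for n in links}, seed)
per_node = pagerank_per_node_alpha(
    links, {"LB": 0.5, "post": 0.05, "A": 0.05, "B": 0.05}, seed)
```

In this toy graph, raising only LB’s alpha pushes mass that was trapped in the LB/post loop back out through the seed distribution, lowering LB’s score relative to the uniform-alpha run.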

Among other things, this experience also reinforces the need for a “cred laboratory” which makes it easy to run controlled experiments in seeing how cred flows on unusual graph structures.

In either case: I aim to have the Discourse plugin polished up and fully integrated by the end of the month, so we can start up the dogfooding experiment.

Among other things, this experience also reinforces the need for a “cred laboratory” which makes it easy to run controlled experiments in seeing how cred flows on unusual graph structures.

This sounds awesome! Sign me up! How can I contribute?

Re everything else: you’re kind of describing the same problem Reddit has, where a few memes and popular ideas dominate the “best” filter. Also, and this is really interesting, for something like a meme or pic, just liking the post is enough to signal appreciation, but there’s often little reason to comment. On the other hand, with thought-provoking or complex topics, a “like” is often not sufficient engagement, and people choose to express their response in a longer written form. From there, how do you tell whether the response is in agreement and collaborating to move the conversation forward, or in disagreement and pushing back? Sentiment analysis? This would add quite a bit of complexity to the SourceCred protocol lol

Also, in order for the community to function, things like giving out likes, engaging in conversations, and supporting high-quality content need to benefit all parties involved. If it’s known that liking other people’s posts decreases one’s own cred AND increases someone else’s cred on the weekly/monthly scoreboard, then people are going to be reluctant to do it. Humans are designed to maximize their own reward systems, whether we like it or not. Being selfish (not stupid or short-sighted, but selfish, aka maximizing one’s own short-term AND long-term payout) is a dominant strategy that will always drive any system. That’s why the field of game theory exists and why positive-sum cryptoeconomic mechanisms are so awesome

Furthermore, many people have many different communication styles! Someone might like a post just to show that they read it, whereas someone else might only use likes for things they overwhelmingly support. While the stereotype is that this breaks down along gender norms (males being reserved, females being more expressive), in a diverse and dynamic community it affects everyone. It’s crucial that the algorithms are not biased in favor of one communication style over another, and allow all participants to earn cred for contributing value to the community. What counts as value is a complex question… but it certainly goes deeper than communication style: it’s about what is being said, not how.

you’re kind of describing the same problem Reddit has where a few memes and popular ideas dominate the “best” filter

I think PageRank is a little more robust than Reddit, because the “upvotes” (likes) are not weighted equally. Suppose the SC community blows up and a bunch of shitposters start creating memes and such, while the “core crew” stays focused on liking interesting ideas. And suppose there are 10x more shitposters than “core crew”. In a Reddit karma situation, the memes will float to the top. With cred/PageRank, the “core crew” likely has way more cred than the shitposters, so each of their likes will be worth many shitposters’ worth of likes. So the system should be better able to filter content, and be less susceptible to Eternal September.
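A toy calculation (not SourceCred’s actual algorithm; all names and numbers here are made up) shows the effect: if each like passes along a share of the liker’s cred, one like from a high-cred contributor can outweigh ten likes from low-cred accounts.

```python
# Toy illustration: a like passes along a share of the liking user's cred,
# so likes are not weighted equally. Hypothetical numbers throughout.

def post_scores(user_cred, likes):
    """likes: dict user -> list of posts that user liked."""
    scores = {}
    for user, liked in likes.items():
        if not liked:
            continue
        share = user_cred[user] / len(liked)  # split the user's cred across likes
        for post in liked:
            scores[post] = scores.get(post, 0.0) + share
    return scores

# Ten shitposters (cred 1 each) all like the meme; one core
# contributor (cred 50) likes the substantive idea post.
cred = {f"shitposter{i}": 1.0 for i in range(10)}
cred["core_dev"] = 50.0
likes = {f"shitposter{i}": ["meme"] for i in range(10)}
likes["core_dev"] = ["idea"]

scores = post_scores(cred, likes)
# "idea" (score 50.0) outranks "meme" (score 10.0) despite 10x fewer likes.
```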

burrrata:

On the other hand, with thought provoking or complex topics often a “like” is not sufficient to engage and people choose to express their response with a longer form written response.

We have a lot of flexibility here. I liked your post, and I also replied to it, in a post that @-mentioned you by name, and extensively quotes your text. The @-mention and quoting are easy to detect programmatically, so we will add edges for them. Then we can decide, as a community: how do we want to weight likes, @-mentions, and quotes? By finding values that make sense for our community’s norms and expectations, we’ll get good results.
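For concreteness, the community decision could boil down to a per-edge-type weight table like the following sketch. The numbers are made up purely for illustration, not a proposal, and the config format is hypothetical.

```python
# Hypothetical per-edge-type cred weights -- illustrative numbers only;
# the real values would be chosen through community discussion.
EDGE_WEIGHTS = {
    "like": 1.0,     # lightweight signal of appreciation
    "reply": 2.0,    # written engagement with the post
    "mention": 2.0,  # @-mention: deliberate, directed acknowledgment
    "quote": 4.0,    # quoting suggests close reading and response
}

def edge_weight(edge_type, default=1.0):
    """How much cred an edge of this type should carry."""
    return EDGE_WEIGHTS.get(edge_type, default)
```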

burrrata:

From there, how do you tell if the response is in agreement and collaborating to move the conversation forward, or is in disagreement and pushing back? Sentiment analysis? This would add quite a bit of complexity to the SourceCred protocol

Actually, it won’t add any complexity to the protocol, which is the great thing about protocols. Someone can add sentiment analysis without changing the protocol, but by adding a heuristic which sets the weights on edges based on sentiment analysis. I fully expect this will exist in the future.
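A sketch of what such a heuristic could look like, with a deliberately crude stub standing in for a real sentiment model. The word lists, scaling factors, and function names are all hypothetical; the point is that only edge weights change, not the protocol.

```python
# Sketch of a sentiment heuristic layered on top of the protocol: it only
# adjusts edge weights, so the core cred computation is untouched.

def sentiment_score(text):
    """Stub scorer in [-1, 1]; a real sentiment model would plug in here."""
    positives = {"agree", "great", "thanks", "love"}
    negatives = {"disagree", "wrong", "bad"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    raw = sum(w in positives for w in words) - sum(w in negatives for w in words)
    return max(-1.0, min(1.0, raw / 3.0))

def reply_edge_weight(reply_text, base_weight=2.0):
    """Scale a reply edge's weight by sentiment, with a floor of 0.5."""
    multiplier = 1.0 + 0.5 * sentiment_score(reply_text)  # 0.5x .. 1.5x
    return max(0.5, base_weight * multiplier)
```

A community that wanted disagreement to still flow some cred (pushback is engagement, after all) would simply raise the floor; the choice lives entirely in the heuristic, not the protocol.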

burrrata:

If it’s known that liking other people’s posts decreases one’s own cred AND increases someone else’s cred in the weekly/monthly score board, then people are going to be reluctant to do it.

Yeah, I emphatically agree here. The fact that liking content will tend to reduce your scores right now is a bug. I would rather the semantics be something like: by default, your cred recycles into the graph at large, and by liking content, you can direct your cred there instead. But right now, by default your cred recycles into your node itself, inflating your cred until you issue some likes. That’s the bug, and we’ll fix it.
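To make the bug concrete, here is a toy simulation (purely illustrative; node names and the flow rule are hypothetical, not the real code) contrasting the current recycle-to-self behavior with the proposed recycle-to-the-graph fix:

```python
# Toy illustration of the bug: what happens to a user's "unspent" cred
# when they issue no likes. Hypothetical graph and flow rule.

def step(scores, out_links, recycle_to_self):
    """One round of cred flow. Nodes with no outgoing edges either keep
    their cred (the buggy behavior) or return it to all nodes equally
    (the proposed fix)."""
    nodes = list(scores)
    nxt = {n: 0.0 for n in nodes}
    for n in nodes:
        targets = out_links.get(n, [])
        if targets:
            for m in targets:
                nxt[m] += scores[n] / len(targets)
        elif recycle_to_self:
            nxt[n] += scores[n]           # bug: cred piles up on the node
        else:
            for m in nodes:               # fix: recycle to the graph at large
                nxt[m] += scores[n] / len(nodes)
    return nxt

links = {"silent_user": [], "active_user": ["post"], "post": ["active_user"]}
buggy = fixed = {n: 1 / 3 for n in links}
for _ in range(50):
    buggy = step(buggy, links, recycle_to_self=True)
    fixed = step(fixed, links, recycle_to_self=False)
# With the bug, silent_user keeps their full 1/3 forever; with the fix,
# their cred drains back into the rest of the graph.
```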

burrrata:

Someone might like a post just to show that they read it whereas someone else might only use likes for things they overwhelmingly support … It’s crucial that the algorithms are not biased in favor of one communication style over another and allow all participants to earn cred for contributing value to the community.

These are important questions. I don’t know that it’s possible for SourceCred to be unbiased–what would that mean, exactly? Since there is no ground truth for “value” to compare against, we’re going to be left with an intersubjective, constructed approximation. The approximation will always be imperfect and flawed, and the flaws will benefit some and disadvantage others.

If we can’t make a perfectly unbiased algorithm, we can at least make the tools so that communities using SourceCred can make informed decisions about what biases may be present, and constantly work to improve it, so that it becomes fairer and less biased over time. (Much like we are doing with society itself, at least in theory.) My hope is that most of these improvements won’t require changes to the core SourceCred protocol, but will involve changing the weights, plugins, and norms and practices within the communities affected.

As an example, before adding the Discourse plugin, SourceCred was very biased in favor of people who are active on GitHub versus those active in other ways. Now, it will be biased in favor of people who participate on GitHub or Discourse, but against people who contribute through Discord. To some extent, this bias is desirable, as contributions on Discourse are generally more valuable (because they leave a permanent record that others can learn from and contribute to). But we’ll likely mitigate this bias in the future by adding a Discord plugin. And so forth.

We can riff off the preamble to the constitution a bit. “In order to form a more perfect union”; “in order to devise a less biased algorithm”…

burrrata:

Among other things, this experience also reinforces the need for a “cred laboratory” which makes it easy to run controlled experiments in seeing how cred flows on unusual graph structures.

This sounds awesome! Sign me up! How can I contribute?

Well, I imagine a web app that allows the user to interactively create a SourceCred graph and play with the structure/weights, then run PageRank. Researchers could then use that webapp to create various interesting graphs and we can use them to validate whether the cred flows match our intuitive sense of what should happen.

If you want to take a stab at this, I can write up some notes on how to get started, and what parts of the code base are relevant.

Loving the questions @burrrata and well thought out answers @decentralion. It seems the general “philosophical” framework/attitude is good (continually striving for “a more perfect union”, while realizing the flaws, continually working towards progress).

Discourse results look pretty good. There are some aberrations (like @LB’s inflated scores), but intuitively it feels in the right neighborhood.

I will note that my attitude (so far) in this game is to not pay attention to cred scores, and to like/comment/etc. as if I’m already in a perfect system. This is how I generally act in the Decred DAO as well, where real money is at stake–though not directly as we’re experimenting with here. Is this a good approach? Am I bravely “being the change I want to see in the world”, thereby nudging the system in the right direction? Or am I being lazy and irresponsible, when thinking more critically about which posts I like would be valuable “work” that improves the system?

As for the larger issues around value: as I mention in my reply on another post, I think we could see various “Proof-of-X” type heuristics that can be used to define and gauge various community values. For instance, Proof-of-Awareness-of-Social-Issue-Y, say. Though that approach could have unintended side effects.

I’m also tripping out here on the idea that individual interactions will create structures around said issues in the graph organically, and that these can also be amplified organically through normal interactions. For instance, if I like every post that expresses value Z, won’t more of value Z show up, if we devise some metric to measure it? I’ve been thinking generally about how, even though there will need to be a governance layer, voting schemes, etc., in a way every interaction you have in SC is an impromptu mini-vote. A statement on numerous issues at once. Another way I’m thinking about this is that every interaction is a transaction, with every interaction having its own mini unique market. Or several. In this way, “price discovery” could be happening organically?

So while we should definitely look at this at higher levels of abstraction (definable project/society-wide issues), we should also be cognizant of how higher-level structures are being created organically at the protocol layer, like by like, and not accidentally do harm (or replicate work).

This also makes me think of “dark metrics”, or someone using closed-source proprietary metrics to inform their likes/comments/etc. For instance, if I’m trying to promote a product or company, I have a bot that, in addition to my organic activity, likes any post mentioning that product or company. Or I’m an activist, and I have a plugin that likes things based on identity characteristics (gender, race, nationality, etc.). In a sense, this is perfectly “legal” and expected behavior.

Edit: added below:

Ooooohh… Or let’s imagine someone using some metric to gauge something desirable (e.g. profit, development of a certain feature, some social value, etc.), then mapping that metric to the nodes that increased it. They could then send cred directly to those nodes to increase their influence. This could be akin to an “incentive compiler” (term credit @decentralion), whereby high-level instructions are broken down into a more granular set of instructions. Here, I think a community having its own currency perhaps becomes important. If a project shares the same currency as all other projects (say Grain), then a deep-pocketed attacker could come in and co-opt a community by compiling incentives according to their own metric, not the community’s (perhaps even an “evil” metric, such as sowing division or sabotaging a product). If contributors are rewarded in their own community’s currency, an attacker would have to buy up that currency on the open market, giving money (and therefore power) to the community it was trying to attack.

Also, it’s awesome that the PageRank/SourceCred alg is more robust than a naive up/down vote system (not sure exactly what system Reddit uses tho). It would be great to see more applications integrating this, esp in a social context. While cred is obviously awesome for tracking open source dev contributions, it could really be used to track contributions to any community, including social networks. Currently, and sadly, almost every social network in the world sucks because… well, because of many things… but having a better mechanism for recognizing quality contributions (however the community defines “quality”) would be awesome (esp if that gave governance weight to contributors in a DAO type setting). We’re currently kind of working on this with Daonuts (see about page), but something like SourceCred would really improve the model

decentralion:

Then we can decide, as a community: how do we want to weight likes, @-mentions, and quotes? By finding values that make sense for our community’s norms and expectations, we’ll get good results.

Will you get “good” results tho, or will you get more people doing the things that get them more points?

Atm the only “points” (at least in my head) are that I have ideas and want to brainstorm on stuff to make the internets better. IF I was optimizing for other metrics tho… I might choose to express myself much differently. It’s like someone who posts on Instagram because they want to vs someone posting for likes. Originally it was all people expressing themselves; now it’s almost all people trying to game the algorithm to promote their personal brand. While every human in the system is optimizing for their own personal rewards, the system itself shapes the community greatly. IF the community can shape itself tho (vs FB/Insta doing it), then… well, afaik no one knows because we haven’t tried it yet! Excited to see how it turns out and to participate, but there’s no guarantee of “good” results.

decentralion:

Someone can add sentiment analysis without changing the protocol, but by adding a heuristic which sets the weights on edges based on sentiment analysis.

This is VERY cool! So it’s not just an algorithm, but it’s a platform that people can build apps/plugins for?!

decentralion:

If we can’t make a perfectly unbiased algorithm, we can at least make the tools so that communities using SourceCred can make informed decisions about what biases may be present, and constantly work to improve it, so that it becomes fairer and less biased over time.

We’re monkeys that wear clothes so everything we think and do is biased to a degree and it’s all relative. The most important part is that the algorithms determining who gets cred and why are open so that we can analyze, understand, and shape them together as a community.

decentralion:

Well, I imagine a web app that allows the user to interactively create a SourceCred graph and play with the structure/weights, then run PageRank. Researchers could then use that webapp to create various interesting graphs and we can use them to validate whether the cred flows match our intuitive sense of what should happen.

Running auto-ml on this to optimize for various objectives would be crazy… I mean, I bet that’s what every major platform already does with their content “discovery” algorithms, but to open source that would be amazing. We could even run a tournament like Numerai, but where people submit parameters that make the graph more like whatever the community is optimizing for (and thus earn rewards for doing so). This way, as a community evolves and/or learns to game the system, there would always be new adjustments coming in to reshape the graph toward whatever the community is optimizing for. This explanation assumes an understanding of Numerai, so does that make sense, or should I try to explain it better?

decentralion:

If you want to take a stab at this, I can write up some notes on how to get started, and what parts of the code base are relevant.

I would LOVE to. I’m super busy rn tho building DAOs and whatnot so I dunno how much time I could devote to it, but it’s something that would be incredibly interesting to explore

s_ben:

I will note that my attitude (so far) in this game, is to not pay attention to cred scores, and like/comment/etc. as if I’m already in a perfect system.

I mean… you do you, but altruism is not a winning strategy (unless you’re in a repeated game with > 5% altruistic players, and even then you need to be playing tit-for-tat). I like the idea of stress testing and breaking the game to see where it needs improvement lol

s_ben:

every interaction is a transaction, with every interaction having its own mini unique market. Or several. In this way, “price discovery” could be happening organically

This seems really really interesting and a solid avenue of research. “Price” is just an abstract number measuring “value”, and more often perceived value. What matters is that there’s something being measured and market participants take actions based on their perception of that. We’re spending time/energy writing posts and curating content. If you post a thing, it’s like a sell order and if I reply, it’s like a buy order. You spent time to make a thing and I agreed to spend time replying to and engaging with that thing. When you add cred to that, it measures what we already know.

Example: if Vitalik Buterin responds to or reposts something on Twitter, people pay attention because Vitalik’s thoughts are high value and there’s a high chance that whatever he’s doing is worth paying attention to. This is implied, but the only measure we have right now is followers, likes, retweets, and “impressions.” Being able to map Cred to that would create an open market where participants not only earn social points, but governance weight and financial points as well. This could change (improve) the way humans coordinate and cooperate at scale!

If the algorithms are public and we understand them, then the impact will be positive. If the algorithms are easy to manipulate and/or only serve the interests of a few (advertising dollars, etc…) then the effect will be largely negative as we currently see today. The important part here is making sure that everyone can participate and understand the system, because ultimately they’ll have to shape it as they go.

This creates a meta market of data scientists who can submit tweaked parameters to the model that then will distribute cred (governance/money) to people within a network (which is itself a market!)

Does all that make sense? It’s clear af in my head rn, but I also feel like I might be rambling so please lemme know if this needs clarification!

s_ben:

This also makes me think of “dark metrics”, or someone using closed-source proprietary metrics to inform their likes/comments/etc. For instance, if I’m trying to promote a product or company, I have a bot that, in addition to my organic activity, likes any post mentioning that product or company. Or I’m an activist, and I have a plugin that likes things based on identity characteristics (gender, race, nationality, etc.). In a sense, this is perfectly “legal” and expected behavior.

This is huge. I dunno how to prevent this, but having an open market that rewards people for submitting improvements to the SourceCred alg/parameters could help.

Also, initially bots will have low cred to begin with and thus be less influential, but… I’ve been following subredditNN for a while now and the bots are getting better and better at an alarming rate (it’s a Reddit community of 100% bot-generated content: pictures, memes, text posts, everything generated by various types of neural networks). Maybe if a bot is actually producing stuff people like, it should get points for that, but then a data scientist could create a bunch of bots and use their cred to vote on things. Feature or bug? I dunno…

s_ben:

If contributors are rewarded in their own community’s currency, an attacker would have to buy up that currency on the open market, giving money (and therefore power) to the community it was trying to attack.

Not sure if I follow the first half of this example, but I definitely follow this part. That’s the beauty of Proof of Stake or bonding curve systems. They provide cryptoeconomic security simply based on the fact that it becomes exponentially more expensive to attack/buy a network the more you try!

EDIT: after sitting with all this for a bit and letting it stir around in the back of my mind, I think that the main point I’m trying to get at is:

at the heart of any social network/platform is a graph that measures the value of users and content

“value” is whatever reward function that network/platform is trying to optimize for

the social network then ranks users and content based on what is perceived to be valuable for its objectives (not necessarily the objectives/values of users on that network)

valuable users/content are then promoted, while less valuable ones are ignored, shadow banned, or banned outright

this is happening right now on digital platforms like Facebook/Insta, but also in meatspace with China’s social credit system

in both cases the problem is not measuring value, but the fact that participants in the network are not able to influence their destiny within that network beyond optimizing for the metrics that a single opaque entity dictates

the killer app of blockchains is permissionless innovation, which is another way of saying open markets and competition that puts users in control and allows them to be participants in a network vs a product of it

it is essential that participants in SourceCred networks/communities can easily understand and contribute to the governance (optimization) of the SourceCred algorithm that determines their reputation and financial gain in that network/community