On Collective Debate, users take a test of their moral views, then debate an artificial agent on a controversial claim. Users indicate how much they agree with the claim, then exchange arguments with the agent. After the debate, users re-evaluate their position. The agent is trained to make arguments that nudge the user toward a more moderate view.

@greaterthan1000 @ems_hodge Hey Sam, I think I've fixed the slider issue. Let me know if it works when you give it another try. Thanks again for the feedback.
I really want to like this. Like really want to like it. I answered the questions, was excited to begin. I was unable to progress past the first claim. The slider won't slide. =/
@greaterthan1000 @greenbeandou can you help? Looks like the slider is stuck.
@greaterthan1000 oh shoot... I'm really sorry about that... which browser / OS are you using? thanks for giving it a try anyway :)
Why is it desirable to be more moderate?
@anthonyadams Hey Anthony - thanks for the question. The premise behind optimizing for moderation is that on certain political and moral questions there is no single right answer, so in a sense the best position is to neither strongly agree nor disagree, which means seeing both sides of the issue. This idea is based on Dan Ariely's research on the role that "confident moderates" play in helping groups make better decisions. There's a cool TED talk on his findings here: https://www.ted.com/talks/marian...
@greenbeandou thanks for the cool talk, I enjoyed it. It will help with a project we are working on to create a collectively built and run island for visionary minds. I'll think more deeply about the role of the moderate, and I'll spend some more time with your app.