Some feedback: out of the 10 articles I saw, 9 were left-liberal biased. I would intro the tool by trying to provide a better mix. A tool for revealing bias shouldn't itself be so biased, in other words :P
As a user, I feel like there would be a lot of friction in having to consult a bot every time I wanted to see an article's bias. This product would be much better as a layer of metadata on top of an article, for example a Chrome plugin. The "bot" would become smarter much faster this way, and the usage / metrics / retention would be astronomically higher.
Pretty cool though. I think tools like this will definitely grow in popularity and, hopefully, can let people know when they're engaging in confirmation bias (mostly liberals :P).
This is pretty cool :D
@kurtybot thanks for the feedback Kurt! I think you're referencing the intro, which pulls the top news stories from Google News. It's pulling the most popular stories regardless of bias. My hunch is that with the recent news from the Trump administration, left-leaning sources are getting more traction (hence 9 of the 10 top stories being left biased).
We could change the intro to be half left and half right, but then we run into the problem of artificially increasing the relevance of less popular news stories of either leaning (according to Google News). I'd love to hear your thoughts on this.
You can send NewsBot any article via a link.
@kurtybot : totally agree with your feedback - especially the chrome plugin idea. eliminate the friction by fully integrating the value of the algorithm into the user's already-existing news reading experience. that would be a great next-step if the bot mvp validates the problem/solution
I've known @theashbhat for 3 years now and I'm pumped to finally be working on a project with him.
Obviously, there's been a lot of fake news and political drama happening right now, so we decided to build a Facebook bot with one of the best models for predicting political bias as well as summarizing articles (85% on our sample set, though we've changed and improved the model since then).
Play with it yourself and let us know what you think! Doesn't require downloading anything.
Congrats on the launch @ali_wetrill @theashbhat @tzhongg
It's a fantastic way to use a chatbot to help reduce fake news and the spread of fake information.
As a French person, I don't know the bias of every source very well, so it's also helpful for that.
With the recent French elections, your chatbot would have been very useful against fake news.
As a chatbot maker, I would love to learn more about the tools you used to build it, if it's not a secret? :)
@mrcalexandre yea! Our magic is mostly ML classifiers that we built from scratch. UC Berkeley has an amazing ML course called CS189, and I took Andrew Ng's Coursera course (Stanford's equivalent), which is also very good. Outside of that, no real additional tools. It's a pretty straightforward server that gets requests from Facebook and responds via webhooks!
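For readers wondering what that webhook loop might look like, here's a minimal sketch assuming a Python/Flask server and the Messenger Send API; the token and the classify_article() helper are hypothetical placeholders, not NewsBot's actual code:

```python
# Minimal sketch of a Messenger webhook: Facebook POSTs message events here,
# and the server replies via the Send API. Details are illustrative only.
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # issued by Facebook for your page

def classify_article(url):
    # Hypothetical stand-in for the custom ML classifier described above.
    return {"political": True, "leaning": "left", "summary": "..."}

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender_id = event["sender"]["id"]
            text = event.get("message", {}).get("text", "")  # user sends a link
            result = classify_article(text)
            reply = f"Leaning: {result['leaning']}\nSummary: {result['summary']}"
            # Respond to the sender via the Messenger Send API.
            requests.post(
                "https://graph.facebook.com/v2.6/me/messages",
                params={"access_token": PAGE_ACCESS_TOKEN},
                json={"recipient": {"id": sender_id}, "message": {"text": reply}},
            )
    return "ok", 200
```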
I really like this product. It's a great idea to show whether sources are trustworthy, but can you get more fine-grained? Perhaps show who the author is and whether he or she is reliable? That might be more helpful for promoting accurate and unbiased reporting.
@jleodaniel we have multiple bias signals to compare the user input against: for example, the average domain bias (the average across the many articles published) and our classifier's prediction. If a report is too far off base and isolated, it isn't factored into the rating. We've also found that our users tend to falsely report fake news more often than political leaning. We're monitoring the user input as well. :)
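An illustrative sketch of that kind of rating logic, comparing the classifier's prediction with the domain average and dropping isolated user reports; the scale, threshold, and function names are assumptions, not NewsBot's real values:

```python
# Combine bias signals, ignoring user reports that are isolated outliers.
# Scores run from -1 (left) to +1 (right); the outlier gap is illustrative.
def combined_bias(classifier_score, domain_avg, user_reports, outlier_gap=0.6):
    kept_reports = [
        r for r in user_reports
        if abs(r - domain_avg) < outlier_gap  # drop isolated, off-base reports
    ]
    signals = [classifier_score, domain_avg] + kept_reports
    return sum(signals) / len(signals)

print(combined_bias(-0.4, -0.3, [-0.2, 0.9]))  # the 0.9 report is discarded -> -0.3
```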
A straight left/right bias axis is way too simplistic for media bias. There are roughly 8 major types of bias in news (arguably more). I like that you're measuring it against a crowd (I think?), but I also agree with the user who suggested this would suit a metadata plugin better than a chat bot.
@rob_wood1 we created an ML classifier for political text and trained it on around 16,000 articles. It classifies whether an article is political and then its political leaning. If it's not political, the bot says so. We're also doing things like summaries. I'd love to hear more about the different types of bias!
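A rough sketch of such a two-stage pipeline (political vs. not, then left/right) using an off-the-shelf bag-of-words model in scikit-learn; the toy training data merely stands in for the 16,000-article corpus, which isn't public:

```python
# Two-stage text classification sketch: stage 1 decides if the text is
# political, stage 2 predicts its leaning. Purely illustrative training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

political_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
leaning_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())

political_clf.fit(
    ["the senate passed the bill", "the recipe calls for two eggs"],
    ["political", "not_political"],
)
leaning_clf.fit(
    ["progressive policies expand healthcare", "tax cuts shrink government"],
    ["left", "right"],
)

def classify(article_text):
    if political_clf.predict([article_text])[0] == "not_political":
        return "This article doesn't appear to be political."
    return f"Predicted leaning: {leaning_clf.predict([article_text])[0]}"
```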
@theashbhat@rob_wood1 : it seems tricky to validate a solution to 2 problems at once - that people want to know the bias in a political post AND that people want political posts summarized. out of curiosity, what made you decide to tackle both problems at once?
Siiiick! Reminds me of some research I read a couple of years back that correlated pretty simple bag-of-words analysis with the political leaning (apparent in sponsorship) of legislation. I wonder if there are applications of this in governance.
@theashbhat I will determine for myself if it is fake news; I don't need someone to tell me. I've seen legitimate articles classified as 'fake news' because of keywords and (unfortunately) political reasons. It is the reader's responsibility to be active.
In a time when a lack of trust in the news has created a void between the people and the media, this unique product fills that void by giving people a tool to help filter out phony content and restore their trust in the media. The chat interface is intuitive and its output is very accurate. I'm overall very impressed and hopeful for the future direction of this product!