Where in the World is AI?

Visualization of where AI has been helpful and harmful

A visualization tool that highlights where AI has been helpful and harmful worldwide. On the map, you can filter by domain, from health services to law enforcement, set a year range, and filter cases by whether the AI involved was helpful or harmful.
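As a rough illustration of the kind of client-side filtering the map applies, here is a minimal TypeScript sketch. The case shape and field names (domain, year, label) are assumptions for illustration only, not the tool's actual data schema or API.

```typescript
// Hypothetical shape of one case shown on the map (assumed, not the real schema).
interface AICase {
  title: string;
  domain: string;                      // e.g. "health services", "law enforcement"
  year: number;
  label: "helpful" | "harmful";
  location: { lat: number; lng: number };
}

// The three filters the UI exposes: domains, a year range, and the helpful/harmful label.
interface Filters {
  domains: string[];                   // empty array = show all domains
  yearRange: [number, number];         // inclusive [from, to]
  label?: "helpful" | "harmful";       // undefined = show both
}

// Keep only the cases that satisfy every active filter.
function filterCases(cases: AICase[], f: Filters): AICase[] {
  return cases.filter((c) =>
    (f.domains.length === 0 || f.domains.includes(c.domain)) &&
    c.year >= f.yearRange[0] &&
    c.year <= f.yearRange[1] &&
    (f.label === undefined || c.label === f.label)
  );
}
```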
Discussion
1 Review · 1.0/5
Martha Czernuszenko
Maker
Responsible AI @ AI Global
Hi Product Hunters!

“How an Algorithm Blocked Kidney Transplants to Black Patients” — Wired
“The Netherlands Is Becoming a Predictive Policing Hot Spot” — VICE
“One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority” — The New York Times

As the use of AI becomes more prevalent in our society, so do these shocking headlines. We should not pay attention to how AI systems are designed and developed only when we see a breaking headline. Rather, Responsible AI should be our first thought when we receive data, define a use case, or start a sprint.

Where in the World is AI? is our interactive web visualization tool that explores stories about AI across the world to identify trends and start discussions about more trustworthy and responsible systems. On the map, you can filter by domain, from health services to law enforcement, set a year range, and filter cases by whether the AI involved was helpful or harmful.

We all have a role in helping build a more responsible future with AI. Whether this is your first time hearing about Responsible AI, or you are a technologist or dedicated researcher, we hope this visualization and dataset of Helpful & Harmful AI is useful to you.
Masatoshi Nishimura
Exploring AI writing for social media
Quite an ambitious project. It's nice. Splitting use cases into a binary, good and bad, makes it clear. At the same time, you may need to state clearly what drives those decisions. I also like that they all have news references. It'd be nice if there were a list view option as well. I do like the map visualization, though. It makes it look like AI truly has a global impact!
Martha Czernuszenko
Maker
Responsible AI @ AI Global
Hi @massanishi, thank you so much for your feedback! If you are interested in our labeling process, you can learn more here: https://towardsdatascience.com/w... . Currently, we host internal discussion groups and are opening external ones soon. If you click "dataset & stats" at the bottom, you can see all of our cases in a list view; was there a different format you were recommending? Thanks again :)
Masatoshi Nishimura
Exploring AI writing for social media
@martha_czernuszenko Thanks for the link. Yes, I see the dataset now. Looks good! (I was checking it out on mobile, so I couldn't see it.) I'm still not fully satisfied with the article discussion, ha. It can get controversial really quickly. But I think it can be really valuable, much like how mediabiasfactcheck ambitiously attempts to split media into left and right. I hope you guys keep working on it. It'll become credible over the long run. Best!
Martha Czernuszenko
Maker
Responsible AI @ AI Global
@massanishi Thanks for your feedback, appreciate it! Hopefully with these discussion groups, we can figure out the best way to label.
Hello Makers, kudos! I love that this is a CC product and that you have given a disclaimer on what is considered good/bad. I guess it is not yet optimized for Chrome or Safari on iPhone; I am having a hard time playing with it.
Martha Czernuszenko
Maker
Responsible AI @ AI Global
Thanks @zurgun! Yes, we are not on mobile yet, but hopefully soon :) Thanks for the feedback!