Socratic App

Take a picture of homework, and learn what you need

Hey everyone, I’m Shreyans, one of the founders of Socratic. We're thrilled to share Socratic with you! This super short video is probably the easiest way to understand when, why, and how a student would use Socratic:
To help make learning easier for students, we do two new and challenging things.

First, our AI lets you take a picture of a question, reads it, and classifies the question by the underlying concepts required to answer it. To build this, educators manually classified hundreds of thousands of questions, and then we trained a computer to do the same thing.

Second, we create our own high-quality, mobile-friendly teaching content that we believe is simpler and easier to learn from than most of what exists on the Internet today.

Our hope is that this technology makes learning more accessible for students who don't have personal tutors or parents who can help with science homework. We believe this will be hugely valuable for students, and we're excited to be at the beginning of this long journey.
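The core idea — train on educator-labeled questions, then predict the concept behind a new one — can be sketched as a small bag-of-words Naive Bayes classifier. This is our own minimal illustration, not Socratic's actual system; the toy questions and concept labels below stand in for the hundreds of thousands of labeled examples described above.

```python
# Minimal sketch (not Socratic's code): a bag-of-words Naive Bayes
# classifier trained on educator-labeled questions, predicting the
# underlying concept for a new question.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return [w.strip("?.,!").lower() for w in text.split() if w.strip("?.,!")]

class ConceptClassifier:
    def fit(self, questions, concepts):
        self.word_counts = defaultdict(Counter)   # concept -> word frequencies
        self.concept_counts = Counter(concepts)   # concept -> #labeled questions
        self.vocab = set()
        for q, c in zip(questions, concepts):
            words = tokenize(q)
            self.word_counts[c].update(words)
            self.vocab.update(words)
        return self

    def predict(self, question):
        words = tokenize(question)
        def log_score(concept):
            total = sum(self.word_counts[concept].values())
            prior = math.log(self.concept_counts[concept] /
                             sum(self.concept_counts.values()))
            # Laplace smoothing so unseen words don't zero out a concept.
            return prior + sum(
                math.log((self.word_counts[concept][w] + 1) /
                         (total + len(self.vocab)))
                for w in words
            )
        return max(self.concept_counts, key=log_score)

# Toy stand-in for the labeled training data.
clf = ConceptClassifier().fit(
    ["What is the derivative of x squared?",
     "Find the slope of the tangent line at x = 1",
     "Balance the equation H2 + O2 -> H2O",
     "How many moles are in 18 grams of water?"],
    ["derivatives", "derivatives", "stoichiometry", "stoichiometry"],
)
print(clf.predict("Compute the derivative of x cubed"))  # derivatives
```

A production classifier would of course need far richer features and data (the comment above mentions hundreds of thousands of manually labeled questions), but the supervised train-then-predict shape is the same.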
Point the camera at your homework question, and Socratic will teach you how to answer it! The Socratic app combines cutting-edge Artificial Intelligence (AI) with amazing educational content to make learning on your phone easy. Trained on millions of sample homework questions, the app can accurately predict which concepts will help you solve your question. Socratic's team of educators is creating highly visual, jargon-free content to teach every important high school curriculum concept, and is curating the best online videos from sources like Khan Academy, Crash Course, and others. The Socratic app represents a huge improvement in how students learn on the Internet. And I love this team @cjpedregal @5hreyans (full disclosure: yes, we at Shasta are investors)
@nbt @cjpedregal @5hreyans saw this demo a few weeks ago. so cool!
Hey @cjpedregal & @5hreyans! Obviously Summize was a huge hit, but perhaps caught too much steam at initial launch. How is Socratic different?
Hey @andrewett, thanks for the question. Photo-as-input is definitely a similarity (one we see in other education apps like PhotoMath, MathPix, and Yup), and behind the scenes there's some AI in both. Most other things about us are quite different. Our goal is to teach students how to answer their questions, and to do that we've done two (fairly challenging) things.

First, we've built our own AI classifier that takes an arbitrary question and classifies it by subject and by the specific underlying concepts required to answer it. Educators categorized hundreds of thousands of questions into their underlying concepts, then we trained a computer to do the same thing. As far as we know, this doesn't yet exist anywhere else.

Second, we create our own teaching content. We've spent quite a while thinking about what style and structure of content is easiest for students to learn from on their phones, and we're working with a community of educators to create this content for every relevant concept.

Taken together, the experience we offer students is quite new, and (we think) hard to replicate.
Very Cool!
This is a neat idea; congrats on the launch. How do you ensure the quality and relevance of the information you provide to students? In other words, how do you make sure you don't accidentally send them off on a tangent with the wrong concept or formula based on the AI's interpretation of the question? Trust is obviously going to be a big factor here: a couple of inaccurate responses and trust/credibility will drop significantly for future use.
Hey @sarthakgrover, that's a great question, and I fully agree with your point on trust. For our app to be useful to students, it needs to do two things:

* classify a question into the correct topic
* show useful and accurate content for that topic

We are constantly measuring how well our classifier is doing by cross-validating it against the hundreds of thousands of questions our educators have manually labelled. This tells us which topics we predict well and which ones need more work. When a topic needs work, we acquire more data (by manually classifying more questions) or do manual feature engineering.

As for the accuracy of the content, that's something our community is constantly working to improve. Our Explainers are peer-reviewed by community members before going into the app, and the content is iterated on based on student feedback. We believe there is tremendous potential to incorporate student feedback more deeply into the content creation process.

While we are proud of the quality of our classifier and our content, we appreciate that this is just the beginning. In the same way that Google is still improving its ranking algorithms almost 20 years after launch, we don't think either of these will ever be solved problems. Nor should they be: we should always strive to do a better job of helping students learn.
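The per-topic health check described in that answer — score held-out labeled questions, then see which topics the classifier predicts well and which need more data or feature work — can be sketched like this. The function name and toy labels are our own illustration, not Socratic's actual tooling.

```python
# Hedged sketch of per-topic evaluation on held-out labeled questions:
# compute recall for each topic so weak topics surface for more data
# collection or manual feature engineering.
from collections import defaultdict

def per_topic_recall(true_topics, predicted_topics):
    hits = defaultdict(int)    # topic -> correctly classified questions
    totals = defaultdict(int)  # topic -> total held-out questions
    for truth, pred in zip(true_topics, predicted_topics):
        totals[truth] += 1
        if pred == truth:
            hits[truth] += 1
    return {topic: hits[topic] / totals[topic] for topic in totals}

# Toy held-out labels vs. classifier predictions.
truth = ["derivatives", "derivatives", "stoichiometry", "kinematics"]
preds = ["derivatives", "limits",      "stoichiometry", "stoichiometry"]
report = per_topic_recall(truth, preds)
print(report)  # {'derivatives': 0.5, 'stoichiometry': 1.0, 'kinematics': 0.0}
```

A report like this makes the feedback loop in the answer concrete: "kinematics" scores 0.0, so that topic is where you'd label more questions or rework features next.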