Where is ChatGPT's limit? How can it create a unique barrier when developing native AI applications?
Kane
26 replies
If the unique feature of native AI applications is just "using AI interfaces", do they still matter?
Replies
Kenny Hawkins@kenny_hawkins
ChatGPT just works and now you can plug-n-play with the API!
And though there are many tools that let you add your own content, it's still very localized and doesn't encompass other environments or services.
We need something that is like Copilot, but mimics the continuity of the Apple ecosystem (i.e. handoff between devices) and is nearly open source so it can integrate with everything. It's not quite a Jarvis yet.
The moment ChatGPT starts to charge, it means its ceiling has been reached.
Survol
Contextualization: this is why the real value of AI only shines when linked with frameworks such as LangChain,
When you can truly extract value from your own data
Survol
@blueeon I made a whole post on my page, but basically think of being able to overcome GPT's lack of context and make it query data on the spot, and reason about the actions it takes to answer a question.
Thanks to those frameworks you can pass it fresh data: documents like PDFs, Notion pages, etc.
This is probably the most straightforward showcase https://m.youtube.com/watch?v=kY...
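The "pass it fresh data" idea can be sketched as plain context stuffing: extract text from your documents and put it into the prompt. A minimal sketch, assuming a hypothetical prompt-building helper (the actual chat API call is left out):

```python
def build_prompt(question: str, documents: list[str], max_chars: int = 8000) -> str:
    """Stuff extracted document text into the prompt so the model can
    answer from fresh data it was never trained on."""
    context = "\n\n".join(documents)[:max_chars]  # crude truncation to fit the input window
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What is our refund policy?",
    ["Refunds are issued within 14 days."],  # e.g. text extracted from a PDF
)
```

Frameworks like LangChain mainly automate this loading, splitting, and prompting step; the underlying mechanism is the same.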
It would not go further than just assisting existing services for now; Copilot might be the best example. GPT is trained on 170+ billion parameters, and they used 10,000 Nvidia GPUs just to train the model. I don't see it going multimodal anytime soon, as we are limited by hardware. But it still matters...
ChatGPT's limits lie in 1) the attention window, which is some thousands of characters afaik, and 2) its failure to elaborate on and modify constrained hierarchical data correctly, for example a source code tree. These problems cannot be overcome at the current stage of the tech.
@oleksandr_koreniuk In fact, I have seen at least two methods on GitHub to solve the input length problem: one is to build a side vector database to store knowledge and bring in the data relevant to each request; the other is to enter the input in segments, summarizing the previous content and adding the current segment each time. I believe the length problem will be solved.
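The second method (segmented input with a rolling summary) can be sketched in a few lines. `summarize` here is a hypothetical stand-in for an actual LLM summarization call, so the sketch stays runnable:

```python
def chunk(text: str, size: int) -> list[str]:
    """Split a document into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text: str) -> str:
    # Hypothetical stand-in for an LLM call; a real implementation
    # would ask the model for a summary instead of truncating.
    return text[:200]

def rolling_summary(document: str, window: int = 1000) -> str:
    """Feed the document in segments, carrying a running summary forward
    so each request stays inside the model's input limit."""
    summary = ""
    for segment in chunk(document, window):
        summary = summarize(summary + "\n" + segment)
    return summary
```

Each request only ever contains the running summary plus one fresh segment, which is how an arbitrarily long document fits through a fixed-size window.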
@oleksandr_koreniuk Your statement is not wrong. The two methods I mentioned optimize the input and do not require the complete knowledge base to produce answers in a Q&A scenario. There is a technique called embeddings, and OpenAI provides an Embeddings API, which can be used to build side vector databases. Another practical example is Stripe, which used GPT-4 to enhance the natural language search capability of its developer documentation: https://docs.google.com/forms/d/.... I hope this is helpful for your reference.
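The side-vector-database method boils down to: embed each knowledge snippet, store the vectors, and for each request fetch the snippets most similar to the query. A toy sketch; the 3-dimensional vectors below are made up for illustration, since real ones would come from an embeddings API:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny "vector database": (snippet, embedding) pairs. Real embeddings
# would be produced by an embeddings API, not hand-written like this.
store = [
    ("Refunds are issued within 14 days.", [0.9, 0.1, 0.0]),
    ("Our office is in Berlin.",           [0.0, 0.2, 0.9]),
]

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k snippets whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved snippets are then stuffed into the prompt, so the model answers from relevant knowledge without ever seeing the whole database.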
I think AI is currently best suited for scenarios where the required accuracy is below 95%. For scenarios that require higher precision, a co-pilot approach may be better. Having expertise in a specific field's data and knowledge can help create a barrier for the application. Hunter, you got any better ideas?
@blueeon I agree too, specialization is still a barrier; the subjective factor is still needed, and it's a roadblock for now. Since an AI is supposed to learn from specialists, I use it as support or as a way to learn more things and facts, but it's still not replacing my own words. Copy-paste is useful but still not that accurate. I'm still confident in my words, style of writing, and empathy... for now... ;)
@alexpsi67 You're right, buddy. We should embrace it, but also be aware of its limitations and distinguish what still needs to be done by humans.
ChatGPT has its limitations, particularly when it comes to complex decision-making and creativity. However, it can still provide valuable insights and information to developers working on native AI applications.
On a separate note, we launched Patr and ranked #3 Product of the Day! We're glad the discussions with the community helped us gain valuable insights for our launch.
Thank you.
@shrilatha_shripathi Okay, I'll go study your product right now.
I think there is definitely space for AI interfaces in the future. But the main issue is that with GPT-4 and other AI interfaces, you have all these apps that simply can't compete with OpenAI itself.
So you really have to offer a distinct value proposition.
Though image generation is a much less crowded space, which is why we're hoping to integrate GPT-4 with evoke-app.com
@richard_gao2 I hope your evoke-app.com keeps getting better and better.
As an AI language model, ChatGPT's limit is solely dependent on its training data and its underlying algorithms. However, it can create a unique barrier when developing native AI applications due to its strength in natural language processing (NLP).
NLP is a challenging field in AI, and developing NLP applications requires large amounts of training data and computational power, making it difficult for smaller organizations or individuals to develop robust AI applications independently. ChatGPT's ability to process language in a human-like manner and generate text that closely resembles that of humans can be a significant boon to developers.
In terms of creating a unique barrier, ChatGPT's ability to generate human-like text could impact the way information is disseminated and how individuals perceive information. Developers need to ensure that the information being generated is factual, unbiased, and does not promote disinformation or harmful ideologies.
Therefore, while ChatGPT can be a valuable tool in developing native AI applications, developers must be mindful of the potential impact that AI-generated text can have on society and ensure that ethical considerations are taken into account.
Conversa - Videos That Talk back
GPT is well designed as an excellent source of information, but it does not have emotional intelligence, which needs to be built into any assessment of the role it is fulfilling.