How trustworthy do you guys find the answers/opinions generated by modern AI chat services (GPT-4o mini, Claude 3 Haiku, Llama 3.1 70B, Mixtral 8x7B)?
I don’t use them directly. Instead, I access them through duck.ai chat.
I find the answers generated by these chat models insufficient. They seem more focused on satisfying the user than on providing genuine answers on some topics like privacy, surveillance, politics, etc.
What do you guys think from your usage of these models (if you’ve used them)? And what do you think a solution would be? Could the privacy community build a more privacy-focused and open model?
Perplexity seems to do a pretty good job, whereas Copilot makes stuff up all the time.
All LLMs I’ve tested had a tendency to agree with my delusions and misconceptions, so you have to be very cautious not to ask loaded questions. If you start misleading the LLM, it will go with the flow and give you a wrong answer.
Copilot and ChatGPT prefer to avoid PR disasters, but Mistral has no issues with sensitive topics. Mistral doesn’t really seem to have many opinions about anything, so you can dive into any topic you like. The other LLMs do have clear opinions and lines they won’t cross.