That's the name of the article, and I wholeheartedly agree. https://link.springer.com/article/10.1007/s10676-024-09775-5 Another article on how ChatGPT's performance on the bar exam might have been exaggerated. https://nysba.org/why-gpt-4s-score-on-the-bar-exam-may-not-be-so-impressive/
Thank you, sanantone. AI is great at writing middling-quality text with unreliable relationship to actual facts, and remixing existing art in ways that shows no understanding of its subject matter. Everything beyond that is hype. "AI accidentally made me believe in the concept of a human soul by showing me what art looks like without it." - Natasha Mazumdar
Hello! I would say that everyone is entitled to their opinion, but it's important to consider the fact that ChatGPT is a cutting-edge technology that has been trained on vast amounts of data to generate human-like responses. While it may not be perfect, it is constantly improving and has the potential to be a valuable tool for a variety of applications. It's always important to approach new technologies with an open mind and give them a fair chance before forming a judgment. (Answer generated by ChatGPT. Which, to me as a native French speaker, sounds VERY strange [you can read "ChatGPT" as "cat, I have farted"]). Best regards, Mac Juli
I used AI to help me summarize the paper, and the following is what I got:
Large language models like ChatGPT can be seen as purveyors of "bullshit," not lying or hallucinating, because their design focuses on generating text that mimics human speech rather than conveying accurate information.
Key Takeaways:
Large language models are described as engaging in "bullshit" because of their indifference to truth and their intent to produce convincing text rather than convey accurate information.
ChatGPT and similar models are designed to imitate human speech without concern for truth, potentially misleading audiences.
Describing AI inaccuracies as "hallucinations" is misleading and does not align with the nature of their outputs.
For now. I have a run-of-the-mill, late-model Mazda. It gets 255 horsepower on premium fuel with its turbo 4-cylinder engine. The first mass-produced Ford Model T got 20 horsepower. Should we have just pitched it for its lackluster performance? AI is here to stay. Humans had better get on top of this and be ready to redesign social structures, particularly in terms of work and pay. Since the Industrial Revolution, we've been in steep competition with each other, measured by how much pay we could wrangle from our employers. With AI, there might not be enough work to go around for humans. For those who cannot work--because there is no work for them to do--are we to let them starve? Sure, there will always be room for the most creative. But what about all those people who trade their low-skill labor and loyalty for a decent living? They've been under threat for decades, but AI may be the thing that wipes out their jobs completely. We may be faced with the need to redefine work, worth, and how one takes a productive place in our society.
It will definitely improve, but its current abilities are overstated to the point that college students and non-students alike are relying solely on it for information, believing that it's accurate. Side note: At this moment, the AI update on my phone is working my nerves because the grammar and spelling corrections and predictive text are worse than what we previously had.
ChatGPT is great for brainstorming and recipes! I have used it several times when I am not sure what to make for dinner. I can start with a general idea based on what I have… like… give me a recipe with chicken and rice that I can make in a skillet. It will spit out a recipe, then I can tweak it by saying something like, I can't use milk or egg… or I don't have thyme. So far, it has given me wonderful new recipes. People who are trying to get ChatGPT to earn their degree for them are doing it all wrong. That's not using the tool correctly; that's trying to get something for nothing.
Ironically, this was an early selling point of personal computers back in the late 1970s. You'd be able to track your ingredients and figure out what you could make from them. It was nonsense, of course. And now it's not.
I have used CGPT for about two years now and it has progressively improved. Hallucinations are fewer, particularly with access to the internet. As someone else mentioned, it is here to stay - and it will continue to get better. Will it replace some jobs? It will certainly mean a reduction of some, but we are not at the elimination stage just yet. It can be trained and it can be quite accurate, but it absolutely requires human oversight. In my experience, it has given me more good data than bad, has drafted emails for me, reworded phrases and paragraphs, and much more. It does a great job of creating outlines, it can speak to any topic imaginable, etc. The benefits of AI outweigh the drawbacks, but it needs to be used correctly. Like Vicki said, it is a -tool-, and you either learn to use it, or face being replaced by someone who does.
It's one of the tools I use at work and for personal needs. Since mastering the prompts, I've found the tool extremely helpful, and it has increased my productivity. But it's one of a range of AI tools I have been increasingly using. It's constantly improving.
Hello! A paper I wrote myself at UCAM over three days: 72 points, passed. For fun, I resubmitted for the same task a paper ChatGPT produced in 60 seconds: 80 points. And that was ChatGPT 3.5! Best regards, Mac Juli
I used ChatGPT to assist me with automating a time-consuming task at work. The automation scripts ChatGPT generated needed little tweaking and were deployed in UAT and later Production environments. Saved me a lot of time. And now the task takes minutes instead of hours.
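To give a sense of what that can look like (not the poster's actual script), here is a minimal sketch of the kind of small automation ChatGPT typically drafts well: combining a folder of per-team CSV reports into one file. The folder and file names are made-up placeholders.

```python
# Hypothetical illustration only: merge every CSV in a "reports" folder into a
# single combined file. Paths and filenames are placeholders, not the actual
# task from the post above.
import csv
from pathlib import Path

REPORTS_DIR = Path("reports")          # assumed folder of input CSVs
COMBINED_FILE = Path("combined.csv")   # assumed output file

header = None
rows = []
for csv_path in sorted(REPORTS_DIR.glob("*.csv")):
    with csv_path.open(newline="") as f:
        reader = csv.reader(f)
        file_header = next(reader, None)
        if header is None:
            header = file_header
        rows.extend(reader)

with COMBINED_FILE.open("w", newline="") as f:
    writer = csv.writer(f)
    if header:
        writer.writerow(header)
    writer.writerows(rows)

print(f"Combined {len(rows)} rows into {COMBINED_FILE}")
```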
Hey, everyone! Did anyone see that TikTok where the guy asks ChatGPT how many Rs are in the word “strawberry”? ChatGPT kept insisting that it was two. As a teacher, I find ChatGPT can be really helpful in generating essay prompts and discussion questions, but I don't think it can ever replace or replicate the human mind.
I completed Prompt Engineering classes using ChatGPT. I'm using ChatGPT on a daily basis: it helps with completing tasks fast, it helps with analytics, and Large Language Models like ChatGPT are helping people make a living. Is it perfect? No. Do English teachers hate it? YES. But it's a fantastic tool, if one uses it properly. Our C-level executives had me do a few demos of projects that were completed with the help of such technologies. And a good friend of mine makes a living using such tools to write children's books.
Many English teachers hate it. Kids abuse it, and it's extra work to check for plagiarized work, homework written by AI, etc.
As an educator, I can think of an easy solution: have your students write on paper in class. At least a few times. You'll have honest writing samples and will know what your students are capable of.
We should always be thinking about how we take the response and use it to inform the next question or the next statement in the conversation, or how we give it feedback on what it did well and what it didn't do well. That's how we get the really useful products. That's how we go from the mindset of a hammer, where we strike once, it doesn't give us what we want, and we throw it on the floor (that mindset is wrong), to the mindset that we're going to have to chisel away at the rock to get the really beautiful outputs. If we're not continuing the conversation, continually asking follow-up questions, problem-solving within it, trying to move around roadblocks, and taking what we're given and shaping it into different formats that may be useful to us, we're really missing the underlying power and capabilities of these large language models.
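A minimal sketch of that conversational "chiseling" loop, assuming the OpenAI Python client (openai>=1.0); the model name, prompts, and follow-ups are placeholders invented for illustration:

```python
# Sketch of iterative prompting: feed each reply back into the conversation
# with a follow-up, instead of accepting or discarding the first answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    # Send the whole conversation so far and return the assistant's reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

messages = [
    {"role": "user", "content": "Draft a one-paragraph summary of our Q3 sales report."}
]
reply = ask(messages)
print(reply)

# Follow-up "chisel strokes": feedback and refinement requests applied in turn.
follow_ups = [
    "Good start. Cut it to three sentences and lead with the revenue figure.",
    "Now rewrite it in a neutral tone suitable for an external newsletter.",
]
for follow_up in follow_ups:
    messages.append({"role": "assistant", "content": reply})  # keep its answer in context
    messages.append({"role": "user", "content": follow_up})
    reply = ask(messages)
    print(reply)
```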