AI is making it easier than ever for students to cheat.

Discussion in 'Off-Topic Discussions' started by Lerner, Dec 13, 2022.

  1. Lerner

    Lerner Well-Known Member

    Last edited: Dec 13, 2022
  2. Rich Douglas

    Rich Douglas Well-Known Member

    I played with ChatGPT for a little bit. Yes, it generates text. But it misses a lot of context, doesn't access the internet (so it is post-2019 dumb), and doesn't give citations.

    It might be necessary for instructors to design assessments that actually measure learning instead of measuring content.

    Google has one (LaMDA) it's keeping under wraps, reportedly because they cannot figure out how to monetize it (yet). It's supposed to blow away every other one out there.
     
    Rachel83az and JBjunior like this.
  3. Lerner

    Lerner Well-Known Member

    “LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient.” - how do they define sentient??
     
  4. Johann

    Johann Well-Known Member

    I think that's usually by the thing being able to ace a Turing Test. Lemoine claimed it had become sentient - but it hadn't. Google shut the claims down. Doesn't matter - however you define sentient - LaMDA isn't. This is just another PARTIAL story to create sensation or fear. :( Don't we have enough fear?

    https://en.wikipedia.org/wiki/LaMDA
    https://www.cnn.com/2022/06/13/tech/google-ai-not-sentient/index.html
    ...and many more.
     
    Last edited: Dec 14, 2022
  5. Lerner

    Lerner Well-Known Member

  6. AsianStew

    AsianStew Moderator Staff Member

    Hmm, I haven't used ChatGPT. Just from the name, I thought it would be one of those online popup windows for chatting, in case the user is looking for online help. I think having a chatbot online would be easy to implement, something that would answer simple questions with simple answers, or at least direct users to whatever they're looking for on the website.
     
  7. Johann

    Johann Well-Known Member

    Yes it IS easy to implement. I think EVERY degree mill in the entire world has one of those on its website. You can even get a FREE chatbot and customize it with no coding knowledge required. That's not really AI - it's more like Artificial Stupidity. It only knows how to do the same thing, again and again. AI is much harder. AI has to "learn" to configure new answers.
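    To illustrate the distinction Johann is drawing, here is a minimal sketch (hypothetical code, not taken from any real product) of the canned, rule-based kind of chatbot: it matches keywords against a fixed table of replies and never learns anything new.

    ```python
    # A toy rule-based "chatbot": fixed keyword -> canned answer.
    # There is no learning here; unknown questions get a fallback reply.
    CANNED_REPLIES = {
        "tuition": "Tuition information is on our Admissions page.",
        "transcript": "You can request transcripts from the Registrar.",
        "hours": "Our office is open 9am-5pm, Monday to Friday.",
    }

    def reply(message: str) -> str:
        text = message.lower()
        for keyword, answer in CANNED_REPLIES.items():
            if keyword in text:
                return answer  # the same fixed answer, every single time
        return "Sorry, I don't understand. Please contact support."

    print(reply("What are your HOURS?"))
    print(reply("Can you write my essay?"))
    ```

    The gap between this and something like ChatGPT is exactly the point: the table above can only repeat itself, while a learned language model composes new answers it was never explicitly given.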
     
    Last edited: Dec 14, 2022
  8. datby98

    datby98 Active Member

    Although I've heard of ChatGPT and read many hilarious posts about its output, I have never tried it myself.

    However, honestly, I do take advantage of other types of AI tools when completing my assignments. I confess that I often use some websites to check my writing before submission and see whether those tools can offer better rephrasing or paraphrasing.
    I understand my posts or assignments may read oddly, or be structured in a strange sequence, to those whose mother tongue is English. With the help of AI, I mimic their tone, logic, reasoning, slang, etc., to express myself more precisely and in a manner that is comfortable and familiar for the readers.
     
    SweetSecret likes this.
  9. SweetSecret

    SweetSecret Well-Known Member

    Datby98, that actually seems like a very valid reason to use it, and one I could go for. I sometimes find myself using the same words for emphasis numerous times within a paragraph. Maybe using AI could help me with the editing portion to get those corrected faster.

    I did just see something today about a guy who used ChatGPT and Midjourney to write and illustrate a children's book over the weekend. It's quite impressive, though he said it did still take a few hours. Either way, that could seriously help drive the economy.

    In another instance, I saw a guy use AI to create animated videos with 3D models. While it wasn't perfect, it certainly showed the capabilities of AI.
     
  10. Rachel83az

    Rachel83az Well-Known Member

    I signed up for ChatGPT to see what all the buzz is about. I got some pretty funny stuff when I gave it some absurd requests.

    For more serious stuff, it seems like a good way to brainstorm if you're stuck on something. I asked it for some reading recommendations on a specific subject. It told me it couldn't help me, then still proceeded to give me a short list of actual book titles that I hadn't come across before.

    As long as you're not using it to actually complete your assignments, it seems quite similar to having a classmate to bounce ideas off. If you use AI to 100% write your stuff, it's still at the point where an actual reading of your paper will turn up issues. That may or may not be ironed out in the near future. We'll have to see.
     
  11. Rich Douglas

    Rich Douglas Well-Known Member

    People assessing knowledge and certifying it to award degrees, certifications, and certificates will have to adjust to changing conditions. And they will. They always have.

    When I grew up, if you wanted to take, say, a CLEP test, you sat in a room with your test booklet and pencil under the observation of a proctor. But as technology--and cheating--got better, test proctoring became more sophisticated, too. When I sat for an industry certification a few years ago, I had to leave all my belongings in a locker and submit to a pat down before being allowed into a room--a room I couldn't leave until I was finished with the test. The only things I could have at the computer desk (the test was done on a PC) were my ID, a couple of pieces of scratch paper (which they provided), a pencil (ditto) and my locker key. And those objects had to stay on the desk in sight. I used to work in a CIA secured facility and it was easier to go in and out of it than this proctoring room.

    So, what to do about AI? Change the measurement. For example, a closed-book test isn't an option for an online, asynchronous class. So, don't use it. In the case of AI, it is likely that, like plagiarism, there will be detection methods. But hey, we already have people submitting papers written for them by others, so what are we doing about that? There are some options out there.

    I could usually tell when a student submitted a paper written by someone else (it happened once in a while) because the voice in that paper didn't resemble what I'd been hearing from that student all semester long. Or, often, the help was simply wrong and hurt the paper.

    Here's another thing: I didn't worry about cheating. Seriously, and I knew some students got away with it (often undetected). Why? Why not worry? First, because no system is perfect. Some people are going to be able to get over or past whatever restrictions are put on them. But second, and way more important, life usually has a way of dealing with short-cutters like that. Yeah, they may cheat and get the grade--or degree, even--but in the long run, they won't get as far as they might if they played things straight. Remember, we're talking about adults here, not children (who tend to be more opportunistic in their earlier development). This is how you're going to live your life? Go ahead.

    How do we know that people know? In your everyday life, how do you know that someone knows what they're talking about and/or doing? One of two ways, usually. In some very narrow cases, you rely on someone's license, certification, etc. I do that with my dentist. Odds are, he's not a fake. But usually we know that they know because we talk to them. We can also use this dialogic approach with evaluating students. In fact, at the doctoral level, there is a long tradition of this: the viva voce, or oral defense. In my program at Leicester, it was the only mandatory face-to-face event in the entire program. They wanted to hear you speak about your work. Not to "defend" it, but to explain it in a way that was convincing--that it was your work. I prepared for weeks for mine, including nearly a week in Leicester. It lasted less than 30 minutes. They knew.

    If we weren't so busy processing students instead of educating them, if we were willing to invest in good faculty and pay them well, we could use that investment to take the time to know they (the students) know what they know. But the specific approach (dialogic, in this case) isn't the point. Changing the way we evaluate learning to adjust to changing times is.
     
  12. Lerner

    Lerner Well-Known Member



    Google engineer Blake Lemoine joins Emily Chang to talk about some of the experiments he conducted that led him to think that LaMDA was a sentient AI, and to explain why he is now on administrative leave. He's on "Bloomberg Technology."
     
