https://www3.nhk.or.jp/nhkworld/en/news/20230514_03/ In Japan, they held a mock trial with an AI as the judge. University mock trial features ChatGPT judge
(1) YAY! ChatGPT for Supreme Court Justice. I think you could do worse. And maybe have, on occasion. (2) At first I thought this was "Meet Judge AL." A sidekick for Judge Judy?
IIRC, long before ChatGPT, we experimented with artificial Supreme Court decisions here in Canada, but the effort was cancelled. Time-consuming and expensive. It needed 1,000 years and way too many monkeys and typewriters.
I'll say. Especially since the first infinite batch was busy typing the complete works of Shakespeare. Or so I heard....
I like that the AI would be totally impartial, but I think most people will just complain about this. I also worry that, because AI does not yet actually feel, certain especially heinous crimes will go without the sort of punishment they really deserve; the AI will just default to the general sentencing requirements for the crime.
The original Jetsons--patterned after The Flintstones (patterned after The Honeymooners)--lasted just one season (24 episodes) before ABC canceled them. It ran on Sunday nights against stiff competition and no one paid attention. (Typical for ABC back then.) But the reruns were moved to Saturday mornings and it took off, staying on the air for years.
I'd be concerned about the AI device developing serious prejudices in the process. But hey...I'm retired. AI won't be taking my job!
I saw an article yesterday about a Texas A&M--Commerce instructor who ran his graduating students' papers through ChatGPT, which told him every one of them was plagiarized. This was not correct, but they were denied their diplomas. This was a misuse of the tool; it is not equipped to make those determinations. The university has not yet cleared them all. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
This is a commonly recurring question on the ChatGPT Reddit. People are terrified of losing their degrees now because teachers are running everything through ChatGPT to check for AI. ChatGPT will tell you what it thinks you want to hear, and it will lie. These people shouldn't be teaching if they're that bad at evaluating sources.
This. Learning evaluation in higher education in America has always been a hit-or-miss affair, focused too much on cheating and not enough on higher levels of learning (see Bloom's Taxonomy). When evaluation is aimed low, like measuring remembering (recall) or understanding, there is a huge burden to guard against cheating. Why? Because it's easy to cheat when all you're doing is remembering data like facts, dates, etc. So, what do they do? They find ways to metaphorically strip search test-takers. (Ever been to a Prometric site?) This is why distance learning scares the uninitiated; they just can't see how cheating can be prevented. But it can, through evaluating higher levels of learning.

Even an assignment as ambitious as a term paper rarely gets past understanding. Doctoral theses, especially in professional doctoral degree programs, are aimed primarily at the application level. But when we get students to analyze, to evaluate, or even to create, then it is more likely that (a) they learned and (b) it's them demonstrating actual learning. But this can be hard work. And universities are noted for being lazy when it comes to teaching, treating the classroom as not much more than an apprenticeship. (For both teachers and students.)

One of the things about UoP that some people just didn't grasp is how their courses were about 35-40% self-directed learning in team projects. Not only did this build intangible skills like teamwork and communication and self-leadership, it also allowed students to build projects that demonstrated their learning beyond simple recall (like on a test). We were constantly asking ourselves "How do we know that they know?" Of course, UoP's method--like any method--was only as good as the people executing it, and that got a little shaky as the university grew massively in the early 2000s. Still, when I encounter people who've done a UoP degree, I know how hard they worked to earn it, and the levels of learning they achieved.
Teachers are overworked and underpaid; they often take work home, putting hours of their own time, evenings and weekends, into lesson planning and grading. Imagine you have 200 students, 20% of whom are special needs with adjusted, multilevel curricula. They check sources as much as possible. The departments at schools and colleges are adjusting to AI, ChatGPT, etc., and redesigning lessons, on top of all the additional non-teaching tasks that the admins lay on teachers. This is a major disruption for them, and an adjustment is happening.
https://www.yahoo.com/finance/news/not-law-school-fund-manager-121500002.html The legal services industry is in “big trouble.” At least that’s what fund manager Geoff Lewis believes: "I talk to folks who spend thousands of dollars a week on legal bills, they're already using ChatGPT to generate complex contracts."
That sounds rather unwise given that it makes things up. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=2fa102f47c7f
There's really no good reason not to use AI for routine legal drafting. It's not like lawyers start from scratch there or anywhere else for that matter.
Well, I wouldn't go that far, but I'm pretty sure they don't make up legal citations that they know a judge is going to check.
Agreed. They don't do it themselves. They get an accomplice - ChatGPT - to do it. Lots of stuff like this: https://www.digitaltrends.com/computing/lawyer-says-sorry-for-fake-court-citations-created-by-chatgpt/