Could a computer earn a degree? Would it be wrong?

Discussion in 'Off-Topic Discussions' started by John Bear, Oct 16, 2002.

  1. John Bear

    John Bear Senior Member

    So a computer (Deep Fritz) is poised to become the world chess champion

    And the Dalai Lama writes seriously about the possibility of people being reincarnated as computer programs.

    Is it possible that a computer could earn a regionally accredited on-line degree? Would this be a bad thing?
     
  2. Tom Head

    Tom Head New Member

    Very interesting question.

    Is it possible: At the moment, I don't think parsing AI is advanced enough to pull it off; but given another 20 years, maybe, at least from the multiple-choice questions angle. Essays may be trickier.

    Is it wrong: No, as long as the programmers don't take shortcuts--computers are natural plagiarists with photographic memories, and what one would really need is an AI that, as much as possible, does its own work from scratch.

    And: Can a computer get portfolio credit? Shouldn't Richard Dawkins' PowerBook be able to claim at least Biology 101?


    Cheers,
     
  3. BillDayson

    BillDayson New Member

    That sounds like another way of stating the Turing test premise.

    It would be a bad thing if the computer tested its way to a degree by finding some way to beat the tests. I'm thinking of some sort of algorithm that mechanically calculates probabilities on multiple-choice exams or something.

    But if the computer could demonstrate real understanding of its subject in the same way that human students must, by composing essays, participating in class discussions (if only by text on a screen), solving difficult problems, and by displaying insight and imagination, sure.

    The computer would deserve the degree in my opinion.

    Is it possible? Certainly not today. In the future? I don't know, but I'd suspect that the answer is at least theoretically yes.
     
  4. Bill Highsmith

    Bill Highsmith New Member

    Some thoughts:

    a) The computer wouldn't be allowed to sit the exam. Why? Other test-takers cannot use reference works, calculators, or computers during the exam, so the computer would not be allowed to use itself (a computer).

    b) Besides the illegality of using itself, the computer might also have full-text copies of reference works, and may have internal, virtual calculators and spreadsheets stored on its drives. There would be a debate about whether the stored books are incorporated into the AI algorithms or are simply books hidden away that can be searched by a cheating computer.

    c) You might argue against a) by saying that the AI programs are the test-taking entity and that the computer is just the host; therefore, the computer is not "using itself" in the same sense. However, if the AI algorithms are strong enough to enable a computer to answer essay questions, how would we know that it was not using itself illegally, such as consulting a stored reference work internally? Is this "cheating" or is it photographic memory? Would we have to install AI probes into the test taker to see if it is cheating? Would such a probe be a violation of the AI program's "civil rights"?

    d) A computer would not have a Social Security Number and therefore would not be allowed to register for the exam. It could not take any meaningful oaths or affirm that it was not cheating unless the computer were considered sentient.

    e) If a computer were allowed to earn a degree by examination, then "BA in 4 milliseconds" would be in order.

    f) I've followed computer chess since the 1970s and was very offended when a computer beat a grandmaster. What offended me was the method that the IBM engineers (and everyone else with strong chess-playing programs) used: a brute-force position-evaluation algorithm with pruning. Such programs have an admittedly good algorithm for evaluating the goodness or badness of a particular chess position, and then use sheer computing power to calculate the outcome of every possible move to whatever depth possible until the computer runs out of time. So the computer becomes a better player simply by increasing its speed. It then became inevitable, after enough years of processor, memory, and special-hardware speed improvements, that a computer would be able to consider moves to a depth of 20 or 30 moves (by both players). Is this brute force or intelligence?

    Grandmasters do not play chess this way at all. They may consider the consequences of moves to ten or fifteen levels, but in a completely different way...by pattern recognition. While the computer blithely considers millions of completely silly moves, the grandmaster considers only a few promising threads. With their incredible pattern-recognition capability, grandmasters almost instantly discard all the silly moves and their subsequent variations. Some people have tried to model this type of play for computer chess, but with much less success.
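    For the curious, the brute-force approach described above is essentially minimax search with alpha-beta pruning. A minimal sketch, applied to a hand-built tree of numeric position scores rather than real chess (all names here are illustrative, not any actual chess program's code):

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the best achievable score from `node`.

    Leaf positions are plain numbers (the position-evaluation score);
    interior nodes are lists of child positions. Pruning skips branches
    that cannot affect the final choice, which is what makes searching
    to 20 or 30 plies feasible at all.
    """
    if depth == 0 or isinstance(node, (int, float)):
        return node  # evaluate the position
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizing opponent will avoid this line
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break  # prune: the maximizing player will avoid this line
    return best

# A tiny three-ply example tree; its minimax value works out to 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, 3, float("-inf"), float("inf"), True))  # -> 6
```

    Note how nothing here resembles pattern recognition: the quality of play comes entirely from the evaluation function at the leaves and from how deep the hardware allows the search to go, which is exactly Bill's point.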
     
  5. Rich Douglas

    Rich Douglas Well-Known Member

    IBM developed a computer that could earn a college degree. But when it pledged a frat, it kept shorting out during beer blasts. So sad.

    (It went back home and got a job in dad's hardware store.)
     
  6. Bill Highsmith

    Bill Highsmith New Member

    I guess the best equivalent of self-impairment for computers would be sucking down virus cocktails.

    More computer chess ranting:

    Blindfolded chess: some human players have the remarkable ability to play well without benefit of having the chess board in front of them to remind them of the current state of the game. They have enough visualization and memory to be effective players without a board. I could not think of an equivalent disadvantage to give to a computer chess player that would not completely disable the computer's ability to play.

    Simultaneous chess: some players, because of their aforementioned pattern recognition, can play multiple opponents (30 or 40) at a time and still play very strong chess. The simultaneous player visits each board, typically spending a few seconds, makes a move, and then goes on to the next board. I participated in a simultaneous exhibition by then US Champion Lubomir Kavalek. He soundly thrashed all 30 or so players in a couple of hours, some of whom had expert ratings. My gut feeling is that the human grandmaster's quality of play is not linearly diminished by the number of simultaneous players, but remains in the stratosphere. However, a computer simultaneous chess player *would* be linearly diminished in quality of play, since the brute-force evaluation time per game would be divided by the number of players.

    Then there are those human players with the ability to play simultaneous blindfolded chess. (This is beyond me to comprehend.) George Koltanowski played 56 games in this manner. http://chesmayn.valuehost.co.uk/Blindfold-Chess.htm
     
  7. BillDayson

    BillDayson New Member

    Part of the problem is the nature of chess, I think. Chess is its own ultra-simplified little world, with only a few actors and a tiny set of rules governing possible moves. So a brute force method of attacking it is feasible.

    I think that we will (in fact we already do) see AI that can behave intelligently in an extremely impoverished artificial universe like chess. In chess, 99.999...% of our daily concerns and challenges simply never arise. But it's a long way from an intelligent process-control device to a mechanical man.

    One of the Dreyfuses has written eloquently against the Turing test, arguing that a computer can never emulate a man. That's because part of being human is having a body, having friends and a social life, possessing emotions and aesthetic sensibilities, and so on. If we ask a computer about any of these things, either it won't be able to answer, or else it will lie, trying to impersonate a man. But having had none of these experiences, so the argument goes, its lies will be transparent and easily caught out.

    While I grant all that, I don't think that it really discredits either the Turing test or AI.

    The Turing test isn't a definition of AI, it is simply a sufficient (but not necessary) criterion for AI's existence. If a machine can successfully pretend to be a man, then it qualifies to be called 'intelligent'. But if the machine is honest and doesn't lie, you would never confuse it for a human, but nevertheless it could be intelligent.

    The real problems arise when the Turing test is restricted to impoverished universes of discourse, like chess. The question is no longer whether the machine can imitate a man's communications in every imaginable way, but whether it can imitate a man when communicating in a very restricted language with a tiny field of reference governed by a limited and well defined set of rules.

    That's where AI (of a sort) already exists, I think. And it will probably grow almost imperceptibly from there, as computer capabilities are gradually extended to more and more complex worlds.

    So to get back to the thesis of this thread: Do any university subjects show the kind of characteristics that might make them vulnerable to machine attack?

    Literature? Pretty far fetched. It's just too human.

    But perhaps mathematics. Mathematics seems like a much more complex, open-ended form of chess, with a restricted language and a defined set of rules. I can imagine computers creating proofs (it's already been done; an automated theorem prover settled the Robbins conjecture in 1996). But so far the methods have been brute force, and little mathematical intuition is evident. That would require pattern recognition and the ability to employ analogies, at the very least. But that's certainly coming quickly.

    Bottom line: I can imagine a computer being able to make original contributions to at least some sub-fields of higher mathematics and formal logic.

    So, if a computer can pass math exams and complete a series of problem sets to the level expected of a human student, if it can construct proofs and even produce something new and original now and then, does it deserve a math degree?
     
  8. John Bear

    John Bear Senior Member

    I'm really enjoying reading these responses and comments. Thank you, gentlemen.

    Decades ago, when Marina and I were in the toy and game designing and marketing business, we marketed a game by George Koltanowski, the blindfold champion. I remember trying to talk to him about what he is doing in his mind, while playing 56 blindfold games: is there eidetic imagery, or what? All he would say is, "I cannot talk about that" -- which could either have meant "I will not" or "I can not."

    His game was a charming one: Step Chess. The board was a checkerboard pyramid with nine squares on the first level. Players started with all pawns. When a piece moved 'forward' onto a new level, it also became a new piece. P-K4 meant the pawn was now a bishop.
     
  9. Bill Highsmith

    Bill Highsmith New Member

    I think what you said is true. I was initially astounded and impressed when a computer first beat a grandmaster. But the awe has almost vanished because:

    1) what I said before...it was inevitable given the brute force method selected and the annual doubling of computer processing speed, and

    2) I am not sure that we've learned much about AI from the efforts put into computer chess...again due to the brute force method. As you said, chess is a tiny universe, and learning to navigate that strange universe does not mean that you can navigate any others. However, if they had delved completely into more human-like processes (pattern recognition, etc.), they might not have beaten a chess grandmaster yet, but they would have a lot more generalized and reusable knowledge of AI.

    I still don't think that a computer deserves a bachelor's degree, because:

    1) its work reflects the work of others (the software developers).

    2) the computer, unless it becomes sentient, is but a tool. A student or scientist can do far more work in a given time with a scientific calculator than without. This does not mean that the calculator should receive the student's degree or the scientist's Nobel Prize. An AI computer is no more than a tool and behaves more like a calculator than a human. I would rethink this if a computer became self-aware and was not just simulating self-awareness, but I'm not holding my breath. (I enjoyed the movie AI, but was disappointed that the issues it raised concerned a time when computers are already self-aware, with little about how they got there. It gave little insight into that well-known step at the center of any large development project with a short schedule: and-then-a-miracle-happens. The movie could not have done this, but I was disappointed nevertheless.)

    3) a degree awarded to a computer has no value. In the case of a human job candidate, an accredited degree gives the prospective employer the warm-fuzzy feeling that the human can do, or learn to do, tasks that the employer might find useful. So if the human passed a differential equations course, it is reasonable to expect that he could learn to design certain widgets. However, it is not reasonable to assume that a computer that has been programmed to pass certain academic tests can do anything else that is useful. In other words, the employer is relying on the adaptability, motivation, and loyalty of a human to do vaguely defined tasks in the near future and who-knows-what in the more distant future. By comparison, a computer that has only BS-degree-seeking applications will be able to do nothing else but seek degrees.

    4) the computer does not genuinely want one and would not seek one. It would not receive one on its own merits or efforts; if a computer receives a degree, it is only because a surrogate human wanted the computer to have one and did the grunt work on its behalf. I might warm up to the idea that a software application has a "certification" that the tools it provides have passed a regimen of standardized tests. For example, if an application designs beams for a building, then the resulting design will meet all building codes somewhere.
     
    Last edited by a moderator: Oct 18, 2002
  10. Tom Head

    Tom Head New Member

    This just in: Deep Fritz failed to defeat world chess champion Vladimir Kramnik, and had to settle for a 4-4 tie. Story here.


    Cheers,
     
