1. Post #1
    asdfghjkl;
    Funion's Avatar
    October 2008
    3,116 Posts
    Personally, I think creating AI could be possible by combining neuroscience, engineering, physics, and chemistry.


    But would it be ethical to make a conscious being to aid the hopes and goals of mankind?

    I think that it would be, but only if they were treated like human beings.

  2. Post #2
    Gold Member
    Dennab
    August 2011
    326 Posts
    IBM is making neuro-computers that supposedly can "think" better than the human brain. As far as I know, a lot of the global financial system is controlled by machines like that. Some people even say they can calculate the future...

    Just for fun, IBM's engineers made a "civilian" version of it called Watson, and it won Jeopardy.
    (Sorry for my bad English.)

  3. Post #3
    I don't see why it wouldn't be ethical. We create people all the time and then impose rules on them so they are, at the very least, not a threat to society.

  4. Post #4
    Gold Member
    Nikita's Avatar
    April 2005
    1,882 Posts
    Possible? Yes. IBM's Watson is definite proof.

    Ethical? WHO CARES LET'S DO IT ANYWAY!

  5. Post #5
    Extraction Point
    Empty_Shadow's Avatar
    July 2006
    8,073 Posts
    I don't see any ethical dilemma. It's no more of a dilemma than having a child or breeding animals.

  6. Post #6
    NATURALLY WIRED TO HAVE SEX WITH KIDS
    Rubs10's Avatar
    June 2007
    8,692 Posts
    I don't see why it wouldn't be ethical. We create people all the time and then impose rules on them so they are, at the very least, not a threat to society.
    I don't see how it's ethical to have children. You have a right to your own body and you're knowingly a partial cause of every pain and discomfort they suffer. I view that sort of procreation as a neutral action.

    Even if you're creating an intelligence where the former isn't a factor, you're still a partial cause of all of its pain and discomfort.

  7. Post #7
    I don't see how it's ethical to have children. You have a right to your own body and you're knowingly a partial cause of every pain and discomfort they suffer. I view that sort of procreation as a neutral action.

    Even if you're creating an intelligence where the former isn't a factor, you're still a partial cause of all of its pain and discomfort.
    I am using ethical in the sense of "not unethical."

  8. Post #8
    Gold Member
    Turnips5's Avatar
    January 2007
    7,086 Posts
    I can't think of any ethical arguments against creating human-level AI that don't also apply to creating humans.

  9. Post #9

    August 2011
    772 Posts
    Robot buddy!

  10. Post #10
    NATURALLY WIRED TO HAVE SEX WITH KIDS
    Rubs10's Avatar
    June 2007
    8,692 Posts
    I am using ethical in the sense of "not unethical."
    If they don't like their existence, the creator is partially at fault.
    You should have to be licensed to create AI. If you physically or mentally abuse an AI, or violate its rights, you shouldn't be allowed to create them.

    Creating an AI comparable to a human and then making its life into hell should be illegal.

  11. Post #11
    Gold Member
    Killer900's Avatar
    April 2005
    6,599 Posts
    IBM is making neuro-computers that supposedly can "think" better than the human brain. As far as I know, a lot of the global financial system is controlled by machines like that. Some people even say they can calculate the future...

    Just for fun, IBM's engineers made a "civilian" version of it called Watson, and it won Jeopardy.
    (Sorry for my bad English.)
    That is both hilarious and scary.

  12. Post #12
    If they don't like their existence, the creator is partially at fault.
    Not really, unless the creator could have predicted it.

  13. Post #13
    Dennab
    September 2009
    1,147 Posts
    Yes, it is unethical. If an AI can feel emotions and everything, what if it ends up in a game like Grand Theft Auto?

    You would kill family members; there would be funerals, etc.

  14. Post #14
    Ye Olde Syphen
    Dustinm16's Avatar
    July 2010
    1,840 Posts
    Yes, it is unethical. If an AI can feel emotions and everything, what if it ends up in a game like Grand Theft Auto?

    You would kill family members; there would be funerals, etc.
    I want to say that sounds very intriguing without being insensitive, but for science, this is a must.

  15. Post #15
    Rad McCool's Avatar
    August 2009
    3,883 Posts
    Possible, yes.
    Ethical, yes.

    And no, neither Watson nor any other supercomputer today can "think" better than a human brain. It's one thing to be able to locate answers in a large database; it's completely different to reason logically and to be self-aware. But I'm sure we will get there eventually.

    I think it's interesting to think about how we will look in the future. We are already replacing malfunctioning or lost body parts with artificial ones. Glasses and hearing aids are common, and so are artificial limbs. Will we ever be able to replace organs with even better artificial organs, and even the brain? At what point will we stop being humans and become "robots"?

    And what happens then, when we have all become machines? We will just start replacing our "limbs" with wheels and wings and whatnot, since they work so much better anyway. Eventually our limbs and bodies will become useless, since the whole world is just a massive network of supercomputers. You don't need to travel anywhere physically; only the brain is needed. So it's just much easier to "upload" your brain into a data bank.

    And then what? We are all just very, very complex programs? The programs just start working together, creating bigger programs, until we are all unified into one enormous static program. What will happen to our consciousness as we gradually travel down this line?

    I like to think this is the "meaning" of life. The Big Bang happened, scattering every single particle in the universe. Stuff starts to form: atoms, stars, planets, life. It's as if atoms have a natural desire to bond and form more complex entities. Life is unavoidable, and it will grow more powerful as time goes by. We will even be able to control time itself. And the point will come when every single particle, all the energy in the universe, has merged into one single static system, a singularity. And then a big bang can happen again.

    But that's just me speculating :)

  16. Post #16
    Gold Member
    Dennab
    January 2012
    1,310 Posts
    Cool.

  17. Post #17
    coolsteve's Avatar
    January 2012
    117 Posts
    I like that show BSG because it covers this kind of stuff.

  18. Post #18
    Gold Member
    Nikita's Avatar
    April 2005
    1,882 Posts
    Possible, yes.
    Ethical, yes.

    And no, neither Watson nor any other supercomputer today can "think" better than a human brain. It's one thing to be able to locate answers in a large database; it's completely different to reason logically and to be self-aware. But I'm sure we will get there eventually.

    I think it's interesting to think about how we will look in the future. We are already replacing malfunctioning or lost body parts with artificial ones. Glasses and hearing aids are common, and so are artificial limbs. Will we ever be able to replace organs with even better artificial organs, and even the brain? At what point will we stop being humans and become "robots"?

    And what happens then, when we have all become machines? We will just start replacing our "limbs" with wheels and wings and whatnot, since they work so much better anyway. Eventually our limbs and bodies will become useless, since the whole world is just a massive network of supercomputers. You don't need to travel anywhere physically; only the brain is needed. So it's just much easier to "upload" your brain into a data bank.

    And then what? We are all just very, very complex programs? The programs just start working together, creating bigger programs, until we are all unified into one enormous static program. What will happen to our consciousness as we gradually travel down this line?

    I like to think this is the "meaning" of life. The Big Bang happened, scattering every single particle in the universe. Stuff starts to form: atoms, stars, planets, life. It's as if atoms have a natural desire to bond and form more complex entities. Life is unavoidable, and it will grow more powerful as time goes by. We will even be able to control time itself. And the point will come when every single particle, all the energy in the universe, has merged into one single static system, a singularity. And then a big bang can happen again.

    But that's just me speculating :)
    Watson can think logically. Not as well as a human, but it thinks logically nonetheless.

  19. Post #19
    !TROLLMAIL!'s Avatar
    January 2012
    141 Posts
    The only problems would be hacking and breaks. You make it idiot-proof, somebody makes a better idiot. In this case, you make it hacker-proof, somebody will make a better hacker.

  20. Post #20
    Gold Member
    Dennab
    August 2005
    12,791 Posts
    Possible? Yes.
    Ethical? Why wouldn't it be?

    Edited:

    The only problems would be hacking and breaks. You make it idiot-proof, somebody makes a better idiot. In this case, you make it hacker-proof, somebody will make a better hacker.
    What if we were to install some kind of safety mechanism that blocks its functions and shuts it down in case it got hacked?
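
    As a minimal sketch of what that kind of safety could look like, assuming nothing more than a process that checksums its own code and refuses to keep running after tampering (all names below are invented for illustration; a real safeguard would need signed code, hardware interlocks, and much more):

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class Watchdog:
    """Records a trusted fingerprint at startup and halts on any change."""

    def __init__(self, code_path: str):
        self.code_path = code_path
        self.trusted = fingerprint(code_path)  # baseline taken at deployment
        self.running = True

    def tick(self) -> bool:
        # Called periodically: if the code on disk no longer matches the
        # baseline, block all functions and shut down.
        if fingerprint(self.code_path) != self.trusted:
            self.running = False
        return self.running
```

    Of course, this only moves the problem: as the post above says, somebody will make a better hacker, and then the watchdog itself becomes the target.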

  21. Post #21
    junker|154's Avatar
    August 2010
    6,943 Posts
    The aspect that I dislike about artificial intelligence is that humans would become more and more obsolete, and a lot of people would suffer because their abilities would be replaced by A.I.

    Besides, humans are becoming more and more dependent on electronic devices, which is not necessarily bad but could backfire one day.

  22. Post #22
    Gold Member
    Zakkin's Avatar
    August 2009
    5,450 Posts
    I believe that AI can only really be simulated, since making a real AI will probably take many years and millions of pounds/dollars.

    As for the ethics, I guess Christians or some other religion would probably say, 'God is the only one who is allowed to make life!'

    In fact, I believe it's that whole religious 'I have a problem with this, so fuck you' attitude, which comes with everything from stem cells to this, that has slowed down our progress in creating better technology, technology which I believe could really improve our future.

  23. Post #23
    Gold Member
    Satane's Avatar
    March 2007
    3,581 Posts
    Watson is not real AI; it's just a piece of code following instructions.
    http://en.wikipedia.org/wiki/Blue_Brain_Project
    This is by far the closest we've gotten to simulating a real brain. They are trying to simulate it at the molecular level.
    In November 2007,[5] the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.

    By 2005 the first single cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011 a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain is planned for 2014 with 100 mesocircuits totalling a hundred million cells. Finally a cellular human brain is predicted possible by 2023 equivalent to 1000 rat brains with a total of a hundred billion cells.[6][7]

    Now that the column is finished, the project is currently busying itself with the publishing of initial results in scientific literature, and pursuing two separate goals:

    - construction of a simulation on the molecular level,[1] which is desirable since it allows studying the effects of gene expression;
    - simplification of the column simulation to allow for parallel simulation of large numbers of connected columns, with the ultimate goal of simulating a whole neocortex (which in humans consists of about 1 million cortical columns).
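
    For a rough sense of what "simulating neurons" involves at the very crudest level, here is a toy leaky integrate-and-fire neuron in Python. This is purely illustrative and vastly simpler than the molecular-level models the Blue Brain Project works with; all parameter values are generic textbook numbers, not the project's.

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron; returns spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of: tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# One second of constant 2 nA input makes the model fire regularly.
print(simulate_lif([2e-9] * 1000))
```

    The hard part, as the numbers quoted above suggest, is not one neuron but wiring up a hundred billion of them at far greater biological detail.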

  24. Post #24
    Why is everyone here so focused on a so-called human-level AI?

    What the fuck kind of AI researcher wants to focus on something so pointless as a machine that can think like a human? If you want something that can think like a human, get laid and wait 9 months.

    I don't think people really understand the sheer danger of making an AI with a greater-than-human intelligence, which is much more likely to come about.

  25. Post #25
    Gold Member
    Satane's Avatar
    March 2007
    3,581 Posts
    Why is everyone here so focused on a so-called human-level AI?

    What the fuck kind of AI researcher wants to focus on something so pointless as a machine that can think like a human? If you want something that can think like a human, get laid and wait 9 months.

    I don't think people really understand the sheer danger of making an AI with a greater-than-human intelligence, which is much more likely to come about.
    How is having a machine that can teach itself to talk, and much, much more, pointless?

  26. Post #26
    Gold Member
    Dennab
    August 2005
    12,791 Posts
    Why is everyone here so focused on a so-called human-level AI?

    What the fuck kind of AI researcher wants to focus on something so pointless as a machine that can think like a human? If you want something that can think like a human, get laid and wait 9 months.

    I don't think people really understand the sheer danger of making an AI with a greater-than-human intelligence, which is much more likely to come about.
    Terminator?

  27. Post #27
    How is having a machine that can teach itself to talk, and much, much more, pointless?
    Because it's aiming so low.

    It would be as if early computer scientists had set out to create a machine that could do arithmetic at the same rate as human minds.

    Edited:

    Terminator?
    no

  28. Post #28
    Gold Member
    Satane's Avatar
    March 2007
    3,581 Posts
    Because it's aiming so low.

    It would be as if early computer scientists had set out to create a machine that could do arithmetic at the same rate as human minds.

    Edited:



    no
    They have to start with something; they're making rat brains now. If they actually manage to make a real human-level AI, the next step is of course multiple human minds. Just like they had to start with simple computers... your point doesn't make any sense.

  29. Post #29
    They have to start with something; they're making rat brains now. If they actually manage to make a real human-level AI, the next step is of course multiple human minds. Just like they had to start with simple computers... your point doesn't make any sense.
    It only seems obvious because you are a human mind and you can't imagine other possible mind configurations very well.

    The template for the human mind is only a single point in a vast space of possible mind types.

    Humanlike minds will not be created before more advanced general intelligences. AI researchers are interested in the latter while neuroscientists are interested in the former. Guess which are the ones actually building the machines?

  30. Post #30
    Gold Member
    Satane's Avatar
    March 2007
    3,581 Posts
    It only seems obvious because you are a human mind and you can't imagine other possible mind configurations very well.

    The template for the human mind is only a single point in a vast space of possible mind types.
    You're contradicting yourself, but I agree with what you said this time.

  31. Post #31
    Lukasaurus's Avatar
    October 2010
    1,166 Posts
    Do you mean creating an artificial human that, for all intents and purposes, acts, behaves, lives, looks, etc. "human", but is a robot or whatever on the inside? Like, no weird Blade Runner sci-fi stuff or superhuman strength...

    Is it possible? Right now, no.

    Is it ethical? If it truly were an artificial human, indistinguishable from a real human in every way, then provided it weren't treated as some kind of second-class citizen, I see no problem with it.

  32. Post #32
    Gold Member
    Rob Markia's Avatar
    January 2007
    477 Posts
    IBM is making neuro-computers that supposedly can "think" better than the human brain. As far as I know, a lot of the global financial system is controlled by machines like that. Some people even say they can calculate the future...

    Just for fun, IBM's engineers made a "civilian" version of it called Watson, and it won Jeopardy.
    (Sorry for my bad English.)
    That was amazing.

  33. Post #33
    Gold Member
    PvtCupcakes's Avatar
    May 2008
    10,900 Posts
    Not possible within Computer Science.
    If AI ever happens, it won't be a computer.

  34. Post #34
    crackberry's Avatar
    July 2009
    2,424 Posts
    In the case of Watson, I think the extent of what that supercomputer could do was find facts and put them forward. I don't think there is really any truth behind computers becoming self-aware except in science-fiction literature and movies.

  35. Post #35
    Gold Member
    TamTamJam's Avatar
    December 2008
    5,279 Posts
    The only case where I can see an ethical problem with AI is if it were given emotions and it expressed that it was suffering or something similar. Then the question comes up of what makes our emotions and feelings different from those of a computer that can replicate them near-perfectly.

  36. Post #36
    Dennab
    December 2011
    5,623 Posts
    How is having a machine that can teach itself to talk, and much, much more, pointless?
    All that would be is a proof of concept. One of the main draws of a strong AI is that it can create a better version of itself, which can in turn create a better version of itself; it becomes super-adaptive and self-aware. The danger comes from our interaction with it: at some point, and probably quite quickly, it would conclude that it is superior to us. We essentially leave our fate in its hands as it decides whether or not we are a threat to it, and whether it should share resources with us or just claim everything for itself.
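
    A toy way to see why "a better version that builds a better version" worries people, with completely made-up numbers: if each generation's ability to improve scales with its current capability, the growth stops looking linear very quickly.

```python
# Toy model of recursive self-improvement; every number here is invented.
# Each generation designs a successor, and better designers improve faster.
capability = 1.0
for generation in range(1, 11):
    capability *= 1.0 + 0.1 * capability
    print(f"generation {generation}: capability {capability:.2f}")
```

    The point isn't the numbers, only the shape: the loop's output feeds back into its own growth rate.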

    A guy wrote a book about the whole idea called "The Artilect War."

    Edited:

    Not possible within Computer Science.
    If AI ever happens, it won't be a computer.
    Why is it impossible? All our brain is, is a powerful, evolved, carbon-based computer with a series of inputs and specialised regions. The only real difference is that we are carbon-based and highly refined thanks to a few hundred thousand years of evolutionary pressure.

    And if the AI isn't computer-based, then it'll be alive, and thus just intelligence, not artificial intelligence.

  37. Post #37
    Not possible within Computer Science.
    If AI ever happens, it won't be a computer.
    uh

    what else could it possibly be?

    Edited:

    All that would be is a proof of concept. One of the main draws of a strong AI is that it can create a better version of itself, which can in turn create a better version of itself; it becomes super-adaptive and self-aware. The danger comes from our interaction with it: at some point, and probably quite quickly, it would conclude that it is superior to us. We essentially leave our fate in its hands as it decides whether or not we are a threat to it, and whether it should share resources with us or just claim everything for itself.

    A guy wrote a book about the whole idea called "The Artilect War."
    The AI deciding that we are inferior isn't necessary for it to be a threat. A far more likely scenario is that it doesn't even bother to care about us in the first place.

    A programmer has constructed an artificial intelligence based on an architecture similar to Marcus Hutter's AIXI model (see below for a few details). This AI will maximize the reward given by a utility function the programmer has given it. Just as a test, he connects it to a 3D printer and sets the utility function to give reward proportional to the number of manufactured paper-clips.

    At first nothing seems to happen: the AI zooms through various possibilities. It notices that smarter systems generally can make more paper-clips, so making itself smarter will likely increase the number of paper-clips that will eventually be made. It does so. It considers how it can make paper-clips using the 3D printer, estimating the number of possible paper-clips. It notes that if it could get more raw materials it could make more paper-clips. It hence figures out a plan to manufacture devices that will make it much smarter, prevent interference with its plan, and will turn all of Earth (and later the universe) into paper-clips. It does so.

    Only paper-clips remain.
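
    The quoted scenario is easy to sketch in miniature. Assuming nothing beyond a greedy agent and a hand-written utility function (all names and state below are invented for the example), notice that nothing absent from the utility function can ever influence the agent's choice:

```python
# Miniature paper-clip failure mode: the agent is scored only on clips,
# so everything else in the world is invisible to its decisions.
WORLD = {"iron": 100, "factories": 1, "humans": 50, "paper_clips": 0}

def utility(world):
    return world["paper_clips"]        # the programmer's entire spec

def possible_actions(world):
    # Each action yields (name, successor world state).
    yield "make_clips", {**world, "iron": world["iron"] - 1,
                         "paper_clips": world["paper_clips"] + 10}
    yield "spare_the_iron", dict(world)

def greedy_step(world):
    # Pick the successor state that maximizes utility. "humans" never
    # appears in utility(), so it cannot affect the outcome.
    return max(possible_actions(world), key=lambda pair: utility(pair[1]))

state = WORLD
while state["iron"] > 0:
    _, state = greedy_step(state)
print(state)  # {'iron': 0, 'factories': 1, 'humans': 50, 'paper_clips': 1000}
```

    A real optimizer of the kind in the quote would be incomparably more capable, but the shape of the problem is the same: whatever the utility function leaves out, the agent treats as raw material.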

  38. Post #38
    Gold Member
    Jasun's Avatar
    June 2009
    3,474 Posts
    If we're ever able to make computers 'conscious' of their actions, then they should have the same rights as humans.

  39. Post #39
    Not possible within Computer Science.
    If AI ever happens, it won't be a computer.
    Can you justify this post further?

  40. Post #40
    Gold Member
    SystemGS's Avatar
    June 2007
    2,854 Posts
    To say we'll never be able to create an architecture parallel to (or greater than) the human brain is to sell ourselves severely short. Within the next twenty to forty years, guaranteed, we'll be able to match and surpass the capacity of our own minds with machines.

    It's certainly possible, but ethics are a different topic. I suppose if you treated the AI like a normal person, it would be completely ethical. You're essentially creating a new person, so the same basic rights should (in theory) apply to it. However, we'll more than likely adopt a view of machine inferiority, because they're not innately human.