
As far as I am concerned, machines cannot make moral judgements, so I am curious what the authors will have to say about this.

But I am not very impressed. The book is about 'autonomous robots'. Yet robots never exist independently of human oversight; that is an illusion. The anthropomorphic language also runs away with the authors. Robots do not 'behave', for instance, they do things / execute things. Nor do they bear responsibility for what they do or fail to do. People determine what they do or do not do, and people bear the responsibility for that. Hence ethical rules are needed not for the robots, but for the people who determine what they do.

And then there is the idea of 'artificial moral agents' (AMAs) that the whole book is about. No, (ro)bots cannot 'autonomously' make moral decisions, but the authors twist themselves into all sorts of knots to at least investigate such a thing. All manner of criticism of it is casually waved away. Yet it is impossible and undesirable in principle. See my comments throughout.

The authors also start, once again, from an uncritical optimism about what machines can do. They base this on a technological and economic determinism, as if there were no people making all sorts of decisions about what happens, for instance about whether or not 'driverless systems' get built and introduced. If people with such a narrow normative outlook set out to build machines that 'autonomously make moral decisions', things can only go wrong.

Wendell WALLACH and Colin ALLEN
Moral machines - Teaching robots right from wrong
Oxford: Oxford University Press, 2009; 275 pp.
ISBN-13: 978-0-19-973797-0

[The book opens with what I so dislike: an endless series of acknowledgements and an ode to the wives.]

(3) Introduction

A short description of the growing presence of robots and software agents in human society.

"All of these developments are converging on the creation of (ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction. Isaac Asimov, more than fifty years ago, foresaw the need for ethical rules to guide the behavior of robots." [mijn nadruk] (3)

[My first reaction: robots never exist independently of human oversight. That is an illusion. Nor do robots 'behave'; they do things / execute things. They bear no responsibility for what they do or fail to do. People determine what they do or do not do, and people bear the responsibility for that. Hence ethical rules are needed not for the robots, but for the people who determine what they do.]

"A concern for safety and societal benefits has always been at the forefront of engineering. But today’s systems are approaching a level of complexity that, we argue, requires the systems themselves to make moral decisions — to be programmed with “ethical subroutines,” to borrow a phrase from Star Trek. This will expand the circle of moral agents beyond humans to artificially intelligent systems, which we will call artificial moral agents (AMA's)."(4)

[Note the language: 'engineering' rather than 'engineers'; 'systems are approaching a level of complexity' as if they do so entirely by themselves. People are not put at the centre here, machines / intelligent systems are. So because people cannot keep a grip on what machines do, the machines must learn to keep themselves in check. Remarkable. And who programs those 'ethical subroutines'? Oh yes, people. In the examples that follow - in securities trading, in the electricity grid - it is people who deploy computers and software and who failed to foresee all sorts of scenarios, so that those computers and that software make mistakes that can lead to a great deal of misery.]

"The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (...) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners. As artificial intelligence (AI) expands the scope of autonomous agents, the challenge of how to design these agents so that they honor the broader set of values and laws humans demand of human moral agents becomes increasingly urgent."(6)

[Machines do nothing by themselves. There are no 'autonomous agents'. We should look at the values and norms of the people who design machines and at their normative context, not at the values and norms of machines themselves.]

"However, it is not often possible to predict accurately the impact of a new technology on society until well after it has been widely adopted. Some critics think, therefore, that humans should err on the side of caution and relinquish the development of potentially dangerous technologies. We believe, however, that market and political forces will prevail and will demand the benefits that these technologies can provide. Thus, it is incumbent on anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots, and virtual “bots” within computer networks." [mijn nadruk] (6-7)

[The authors' choice is thus already normative: the market and political forces will demand the benefits these technologies provide, even if those technologies are dangerous. What is 'the market'? The capitalist economy, presumably. What are 'political forces'? Wealthy Republicans, presumably. If so much can go wrong, the 'caution' mentioned seems to me the more obvious choice, and control over the people who develop these technologies essential. Not something you can simply leave to companies.]

"Humans have always adapted to their technological products, and the benefits to people of having autonomous machines around them will most likely outweigh the costs."(7)

[That too is normative: people should just adapt to technology, technology matters more than people, even though we do not know at all whether the benefits of these technologies outweigh the drawbacks. Why not think the other way round: adapt technology to people and only develop technologies of which it is fairly certain that people genuinely benefit? And by that I do not mean the managers and shareholders.]

"Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs?"(9)

[The answers: no, no, not applicable. Note the language again: "believe that computers making moral decisions are necessary or inevitable". So you merely believe that this is so, you know nothing for certain, and yet you claim that it is necessary and inevitable that computers make moral decisions. That does not sound particularly critical.]

The authors then give a chapter-by-chapter overview of the contents.

(13) Chapter 1 - Why machine morality?

The well-known trolley dilemmas of the philosopher Philippa Foot are discussed.

[Where the starting point is already a 'runaway trolley'. I would rather think about how to prevent that from being possible in the first place. But fine. Moral dilemmas can yield many insights into moral decisions.]

"Driverless systems put machines in the position of making split-second decisions that could have life or death implications. As the complexity of the rail network increases, the likelihood of dilemmas that are similar to the basic trolley case also goes up. How, for example, should automated systems compute where to steer a train that is out of control?"(14)

[Why not refrain from building 'driverless systems' if we cannot be sure they will not go 'out of control'? Why do we want such systems in the first place? Is it again a matter of money? The idea of 'stopping when things go wrong' - an obvious approach - is rejected in what follows. Doing nothing is apparently not an option.]
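[For what it is worth, the kind of computation the authors seem to have in mind here is presumably something like the following harm-minimizing choice over track options. A minimal sketch of my own, purely hypothetical; the numbers and the estimate_casualties helper are invented for illustration:

    # Hypothetical sketch of a naive "trolley" decision procedure: pick the
    # switch position whose predicted outcome has the lowest expected harm.
    def estimate_casualties(track, sensor_report):
        # Assumed helper: expected number of people harmed if the runaway
        # train is routed onto this track.
        return sensor_report.get(track, 0)

    def choose_track(tracks, sensor_report):
        # Minimize expected casualties; ties are resolved by list order,
        # which is itself a (hidden) normative choice by the programmer.
        return min(tracks, key=lambda t: estimate_casualties(t, sensor_report))

    report = {"main_line": 5, "siding": 1}            # invented numbers
    print(choose_track(["main_line", "siding"], report))   # -> "siding"

Nothing in this sketch is "moral"; every ingredient - the options, the casualty estimates, the idea that harm can be summed into one number - is a design decision made by people.]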

" ... a designer of AMAs could not simply choose inaction as a substitute for good action."(16)

"Autonomous systems are coming whether people like it or not. Will they be ethical? Will they be good? What do we mean by “good” in this context? It is not just a matter of being instrumentally good — good relative to a specific purpose. When we talk about good in this sense, we enter the domain of ethics."(16)

[Here again the authors show that they start from a technological determinism, as if there were no people making all sorts of decisions about what happens, for instance about whether or not 'driverless systems' get built and introduced.]

"Moral agents monitor and regulate their behavior in light of the harms their actions may cause or the duties they may neglect. Humans should expect nothing less of AMAs. A good moral agent is one that can detect the possibility of harm or neglect of duty, and can take steps to avoid or minimize such undesirable outcomes. There are two routes to accomplishing this: First, the programmer may be able to anticipate the possible courses of action and provide rules that lead to the desired outcome in the range of circumstances in which the AMA is to be deployed. Alternatively, the programmer might build a more open-ended system that gathers information, attempts to predict the consequences of its actions, and customizes a response to the challenge. Such a system may even have the potential to surprise its programmers with apparently novel or creative solutions to ethical challenges.
Perhaps even the most sophisticated AMAs will never really be moral agents in the same sense that human beings are moral agents. But wherever one comes down on the question of whether a machine can be genuinely ethical (or even genuinely autonomous), an engineering challenge remains: how to get artificial agents to act as if they are moral agents." [mijn nadruk] (16-17)
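[The 'two routes' the authors describe can be made concrete with a small sketch of my own (hypothetical, not from the book): a fixed rule table versus a loop that predicts consequences and picks the least harmful action. The rules, the prediction model and the harm measure are all supplied by the programmer:

    # Route 1 (hypothetical): the programmer enumerates circumstances and rules.
    RULES = {
        "obstacle_ahead": "brake",
        "signal_red": "stop",
    }

    def rule_based_action(situation):
        return RULES.get(situation, "do_nothing")   # default chosen by a human

    # Route 2 (hypothetical): an "open-ended" system that scores predicted outcomes.
    def predicted_harm(action, situation):
        # Stands in for a model of consequences - in reality also written by people.
        table = {("brake", "obstacle_ahead"): 0, ("continue", "obstacle_ahead"): 10}
        return table.get((action, situation), 1)

    def consequence_based_action(situation, actions=("brake", "continue", "stop")):
        return min(actions, key=lambda a: predicted_harm(a, situation))

Neither route takes the substantive decisions out of human hands.]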

[Which shows precisely that everything depends on the people who produce an AMA. From those people you may therefore expect moral intelligence, but unfortunately that is often lacking. Another thing: the example of the blocked credit card. The authors write:]

"This incident was one in which an essentially autonomous computer initiated actions that were potentially helpful or harmful to humans."(17)

[That is not an autonomous computer. That is a computer following rules fed to it by people, including ridiculous rules, because the people in the background hold all kinds of assumptions that do not help anyone, such as the fear of the risk of losing money. The authors say so themselves right after this. Why then use that term? That computer is an exact image of the people who deploy it, nothing more. That is not autonomy. Autonomy would be, for example, the computer suddenly deciding on its own to deviate from the given rules and resolving to double, every month, the salary of everyone whose surname begins with the letter G.]

"The widespread use of autonomous systems makes it urgent to ask which values they can and should promote. Human safety and well-being represent core values about which there is widespread agreement. The relatively young field of computer ethics has also focused on specific issues ..."(19)

"The possibility of a human disaster arising from the use of (ro)bots capable of lethal force is obvious, and humans can all hope that the designers of such systems build in adequate safeguards. However, as (ro)botic systems becoming increasingly embedded in nearly every facet of society, from finance to communications to public safety, the real potential for harm is most likely to emerge from an unanticipated combination of events." [mijn nadruk] (21)

[Note: a combination of events that was not foreseen by people and that therefore never ended up in the machines either.]

"Systems that are blind to the relevant values that should guide decisions in uncertain conditions are a recipe for disaster."(22)

[Exactly, and that should make us cautious, but the authors already rejected that earlier. Possibilities that pass in review: 'faulty components', 'bad design', the absence of 'adequate safeguards', 'software bugs', bringing systems to market too quickly while they have not yet been sufficiently tested, etc.]

"Corporate executives are often concerned that ethical constraints will increase costs and hinder production. Public perception of new technologies can be hampered by undue fears regarding their risks. However, the capacity for moral decision making will allow AMAs to be deployed in contexts that might otherwise be considered too risky, open up applications, and lower the dangers posed by these technologies."(21-22)

[That takes an awful lot of confidence in AMAs ... And the goal is apparently that the economy should keep running the way it does now.]

(25) Chapter 2 - Engineering Morality

"In this chapter, we will provide a framework for understanding the pathways from current technology to sophisticated AMAs. Our framework has two dimensions: autonomy and sensitivity to values."(25)

A discussion of Kismet.

"Designers of autonomous systems who choose to ignore the broader consequences of the decisions made by their machines are implicitly embedding a particular set of values into these systems. Whether autonomous systems should consider only those factors relevant to the profitability of the corporations that employ them, and the contractual arrangements that exist between the corporations and their customers, is itself a question about ethics." [mijn nadruk] (31)

[No, really? But in what follows it becomes clear again and again how few questions are asked about the economic system within which these AMAs are supposed to function. I find that strange, because the risks often arise from the haste to make money, for instance before your competitor does. Which means that the quality of the AMAs is subordinated to the mostly quantitative choices a company's management wants to make. And then we know what happens ...]

"Because of operator errors and the inability of humans to monitor the entire state of system software, the pressures for increased automation will continue to mount."(32)

[Which amounts to: we would rather leave it to machines than to people. That says a great deal about your view of humanity and your values and norms. You could also do all sorts of things to ensure that people make fewer or no mistakes, but no, we have no confidence in people, we have more confidence in machines.]

"These systems will need to handle, at the speed of modern computing, complex situations in which choices are made and courses of action are taken that could not have been foreseen by the designers and software programmers." [mijn nadruk] (32)

[How can we ever know, then, whether those systems do their work well? After all, we have no insight into the choices made or the reasons for them. Who is then responsible for the consequences of those choices? Can such a system account for itself the way a human being can? How?]

"In our approach to artificial morality, we take progress along the dimension of autonomy for granted — it is happening and will continue to happen. The challenge for the discipline of artificial morality is how to move in the direction specified by the other axis: sensitivity to moral considerations." [mijn nadruk]

"Real engineering challenges are best pursued with clear criteria for success. How might one develop criteria for moral sensitivity or moral agency?" [mijn nadruk] (35)

[Again that foolish technological and economic determinism. Various AI people have raised objections concerning that moral sensitivity, we are told here. And then coming up with the Turing test - that does not inspire much confidence in me.]

"Before getting into the details of how to build and assess AMAs, however, we will address two kinds of worries we frequently encounter when pre- senting this work. What will be the human consequences of attempting to mechanize moral decision making? And is the attempt to turn machines into intelligent agents as misdirected as the alchemists’ quest to turn lead into gold?" [mijn nadruk] (36)

(37) Chapter 3 - Does humanity want computers making moral decisions?

"There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. We think this anxiety has two sources. On the one hand are all the usual futurist fears about technology on a trajectory beyond human control. On the other hand, we sense deeper, more personal worries about what this technology might reveal about human beings themselves."(37)

"Philosophy of technology raises questions about human freedom and dig- nity in an increasingly technological society. Is it true that many more people in highly industrialized societies are relegated to repetitive and stultifying jobs? Does the demand for the products of technology inevitably decimate the environment? The development of some new technologies, for example genetic engineering and nanotechnology, raises the fear that powerful pro- cesses are being unleashed that humans might not be able to control. Many of the same worries arise in connection with (ro)bots. Much, perhaps most, of what is written in this genre is not engaged with solving technological prob- lems themselves and, indeed, is critical of the idea of technological “progress.” These philosophers of technology often see themselves as providing a necessary counterweight to technology optimists ... (...) While old-style philosophy of technology was mostly reactive, and often motivated by the specter of powerful processes beyond people’s control, a new generation of philosophers of technology are more proactive. They seek to make engineers aware of the values they bring to any design process and wish to influence the design and implementation of technology rather than merely react to it ... "(38-39)

This is called 'engineering activism'.

"Nissenbaum justifies engineering activism as the need to “advocate on behalf of values” that serve humanity. The field of artificial morality shares this activist approach to technology.(...) Helping engineers become aware of both the internal and external ethical dimensions of their work has been an important achievement of philosophers such as Nissenbaum."(39)

"The goal of artificial morality moves engineering activism beyond emphasizing the role of designers’ values in shaping the operational morality of systems to providing the systems themselves with the capacity for explicit moral reasoning and decision making."(39)

[To design AMAs, the makers must examine their own values and the goals they set. Because the production process is so complex, we must equip the AMAs themselves with the capacity for moral reasoning and decision making. But that leads to a kind of circular reasoning. For to build that kind of AMA we must again know which values we hold and put them into the making of the AMAs. Or should those AMAs design and produce themselves without any human involvement? Now that would be autonomous ...]

"Friedman and Kahn suggest that DSTs [Decision Support Tools - GdG] start a slippery slope toward the abandonment of moral responsibility by human decision makers. As people come to trust the advice of a DST, it can become more difficult to question that advice. There is a danger, they believe, that DSTs could eventually come to control the decision-making process."(40)

[I can well imagine that this is how it works in practice, yes. Precisely because all sorts of people - perhaps managers in particular - idealize technology and see it as the cure for all ills.]

"There is a fine line between parlor tricks and duping the public. However, there is also considerable evidence that providing technology with human-like skills can facilitate interaction between humans and their computers, gadgets, and robots." [mijn nadruk] (43)

[No source given here. And that is typical. Who carried out those studies? The companies that make this stuff? Or is there independent research?]

And then about Kismet again:

"Perhaps the best known social robot is Kismet. No one would claim that Kismet has a sophisticated social aptitude. What is remarkable is that even with its very low-level, essentially mechanical social mechanisms, it could be quite persuasive in conveying the sense that it was alive and actually engaged in a form of social interaction. (Some students in our classes feel bad when they see Kismet being scolded.)" [mijn nadruk] (44)

[The most overrated robot of all time. How can anyone even make a connection between Kismet and "being alive" or "interacting"? Yes, but we want it so badly. Here too the question is who exactly investigated this.]

"Certainly, robots like Kismet, with some human-like features and movements, can make interacting with technology easier and more comfortable. But there is also considerable uncertainty as to how human-like technology can or should be. Masahiro Mori, a Japanese roboticist, theorized in 1970 that people become more comfortable with and empathetic with robots with human-like features and movements until they start to look too human, and then people tend to become very uncomfortable with or even revolted by them. The dissonance created by what appears to be human but fails to meet human expectations is apparently quite disconcerting." [mijn nadruk] (44)

"Most people consider robotic dolls a very poor substitute for human companionship, and would regard the attachments formed by the home’s residents as a symptom of society’s failure to attend to the emotional needs of the elderly and the disabled. One might well abhor any suggestion that social robots are a solution to loneliness and need for human interaction. But there are hard questions that need to be asked about the function, practicality, and desirability of social robotics as a response to human needs. For example, if there is no evidence that people and communities are willing to direct the time or resources necessary to respond to the needs of the elderly and disabled for human contact, are social robots better than nothing?" [mijn nadruk] (45)

[I am one of those who hold the former view. And that last question is absurd. So: it is better to deploy robots when people give up the responsibility of caring for other people? That line of thinking is entirely wrong. It seems to me that attention should then go to why that is the case and how we can change it.]

"Might accepting robots into people’s lives dilute cherished human values and degrade people’s humanity? Ironically, this question has been raised by one of the most successful roboticists, Ronald Arkin, the director of the Georgia Institute of Technology’s Mobile Robot Laboratory. Arkin coined the phrase “Bombs, Bonding, and Bondage” to capture the social concerns posed by the three main forms of human-robot interaction — robots as soldiers, as companions, and as slaves. Robots for military applications, intimacy, and labor will each be very different entities with different goals giving rise to different ethical considerations." [mijn nadruk] (47)

"In the United States, where robotic research is largely financed by the Department of Defense, there are plans to spend billions on the long-term goal of developing armed robots."(47)

[I don't think the DoD is putting billions into the development of sex robots :-)]

"To our knowledge, fully autonomous gun-carrying or bomb-carrying systems have not yet been let loose. But the rationale for such systems is simple and compelling — robots decrease the need for humans in combat and therefore save the lives of soldiers, sailors, and pilots."(47)

[Again the wrong line of thinking. Perhaps it would be smarter to work on avoiding war? It is striking in a book like this that all such matters are accepted as self-evident: 'that's just how it is' thinking. No critical questions about that kind of normative premise. And these people want to build 'moral agents'?]

"Given the difficulty of ensuring safety and ethical behavior, it is necessary to think long and hard about when to deploy weapon-carrying systems. The answer is unlikely to be as straightforward as “never.”"(48)

[No, not if the military - or rather the people within the military-industrial complex - are calling the shots, of course.]

"Designers of sex toys are particularly good at taking the lead in appropriating the latest technology to titillate their clients. Technological development has a long history of being driven by pornographic applications, and the field of robotics is no exception. As with all pornographic applications, serious issues about exploitation of women and fostering of antisocial behavior arise. But as with the discussion of robot soldiers carrying weapons, there are two sides to this issue. For example, robot avatars functioning as surrogate sex partners for John’s remote pleasures arguably provide a form of “safe sex.” But no doubt there will also be anecdotal evidence suggesting that relationships with robotic sex toys leads to aberrant antisocial behavior, and future research may confirm this." [mijn nadruk] (48-49)

[So a sex robot is by definition a 'pornographic application'? Women must not be exploited? Men may be, then? What kind of antisocial behaviour? Wanting sex? A sex partner for John? Not for Mary, then? From such a simple bit of text you can immediately tell that the authors are normatively superficial men: the unreflected values and norms fly at you from every direction ...]

"A long-standing attraction of robots has been the prospect of having servants or slaves that work 24/7 and don’t need to be paid — getting the benefit of having slaves without taking on the moral challenges of slavery."(49)

[Misleading language: machines are not people and therefore not slaves either, but devices that do things for people.]

"Assessing the impact of new technologies is far from a science. Risk assessment reports on the safety of drugs, building projects, and complex technologies are filled with data about numerous factors. Eventually, someone has to interpret the relative import of each factor, and quantifiable research gives way to value judgments. All too often, the empirical data is used to mask the fact that some group’s economic or political interests have weighed heavily in the final evaluation of risks." [mijn nadruk] (50)

[Exactly. But then do something with that and take a more critical look at your own positions.]

"That is, the analysis of risks could potentially help the AMA select the best course of action on the basis of the available information. Some of the specialized tools and techniques professionals use for assessing risks have already been computerized. These programs might even provide a software platform for AMAs to analyze the consequences of their own actions." [mijn nadruk] (51)

[Another circular argument. We as humans are supposed to assess the risks of building and deploying AMAs, and then we leave that assessment to the AMAs? Suddenly risk assessment is taken out of human hands again. Typical.]
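[What such a computerized risk analysis amounts to can be shown with a small sketch of my own (hypothetical, not from the book): a weighted sum over factors. The "objective" number depends entirely on the weights, and those weights are exactly the value judgements the quote on p. 50 talks about:

    # Hypothetical risk-assessment core: a weighted sum over factors.
    # The factor list and the weights are choices made by people; whoever
    # sets the weights has effectively made the "moral" decision already.
    WEIGHTS = {            # invented numbers
        "injury_risk": 10.0,
        "financial_loss": 1.0,
        "reputation_damage": 0.1,
    }

    def risk_score(factors):
        # factors: dict mapping factor name -> estimated probability (0..1)
        return sum(WEIGHTS.get(name, 0.0) * p for name, p in factors.items())

    print(risk_score({"injury_risk": 0.01, "financial_loss": 0.5}))  # -> 0.6

Handing this calculation to the AMA does not remove the value judgements; it only hides them in a configuration file.]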

"The social issues we have raised highlight concerns that will arise in the development of AI, but it would be hard to argue that any of these concerns leads to the conclusion that humans should stop building AI systems that make decisions or display autonomy. Nor is it clear what arguments or evidence would support such a conclusion." [mijn nadruk] (52)

"If people had known how destructive automobiles would be a hundred years ago, would they have stopped the development of a favored form of transportation? Probably not. Most people believe the advantages of automobiles outweigh their destructive potential."(53)

[See my earlier remarks on the authors' deterministic thinking. Remarks like these are terribly superficial. What does it really say, that 'most people believe that etc.'? How could those people's views be independent of a society that is built entirely around cars?]

(55) Chapter 4 - Can (ro)bots really be moral?

[No.]

"Many people believe that machines are incapable of being truly conscious, incapable of the genuine understanding and emotions that define humans’ most important relationships and shape humans’ ethical norms. What are these capacities? (The “ontological” question.) What can be known about them scientifically? (The “epistemological” question.) Does artificial morality depend on answering these questions? (The practical question.) Our answer to the ontological and epistemological questions is an emphatic “We don’t know!” (But neither does anyone else.) The reasons no one knows the answers to the first two questions help shape our approach to the practical question, giving us the confidence to answer it with a resounding no." [mijn nadruk] (55)

[Of course we do know which capacities are needed for ethical conduct (such as empathy, compassion, and so on). There is even scientific knowledge about it (about the development of conscience, for example). But the authors do not want to dwell on such matters. They seem to expect that there is a mathematical formula for ethical conduct, and that if there isn't one, we cannot think about it. It says everything about their blind spot.]

"Isn’t it just a conceptual confusion to use “moral” and “ethical” to describe the behavior of (ro)bots?"(56)

[The answer is: yes, it is misplaced anthropomorphic language in which concepts meant for people are applied to machines.]

"In fact, we think that pressing ahead on the practical task of building AMAs will contribute to better understanding of the ontological and epistemological questions about the nature of ethics itself."(56)

[Here we go again, turning things around: we build AMAs without understanding anything, and from those AMAs we are then supposed to learn to understand ourselves. It is the world upside down. The authors' premises are unreflected. Right after this, for instance, intelligence is equated with 'computation'. It is the 'strong AI' position that Searle argued against.]

"Searle argues that executing a program is insufficient for genuine understanding. This argument has given rise to a long debate and countless articles regarding whether a computer system can genuinely “understand anything.” Searle believes that the point made by his Chinese Room argument is common sense, and he expresses surprise that it isn’t more widely recognized by computer scientists." [mijn nadruk] (57)

"... philosophers and others who accept Searle’s argument have taken it to show that programming a computer is a hopeless approach to the development of genuinely intelligent systems. We frequently encounter skeptics who claim that Searle’s result shows that our own approach to artificial morality is similarly hopeless. We disagree. That is, we disagree that the philosophical objections should stop us from continuing to advocate for better computational solutions to ethical decision making."(58)

[Typical.]

"Like free will, human understanding and consciousness hold a mystical fascination for many. And like all attempts at demystifying the human mind, the claim that digital systems can possess genuine understanding or real consciousness evokes strong negative responses."(63)

[The authors apparently consider that nonsense. Their position is that an AMA needs neither consciousness nor free will.]

"Real cognitive systems are physically embodied and situated in the world of physical objects and social agents. The words, concepts, and symbols used by such systems are grounded in their interactions with objects and other agents."(64)

[Exactly, but what conclusion do the authors draw from this? They do not know yet:]

"For the purposes of designing AMAs, much more needs to be understood about the relationship between embodied cognition and the construction of internal virtual or imagined models of the world. We recognize that it’s a long way from insect-like behavior to higher cognition, including ethical decision making."(65)

[A very long way, indeed ... ]

"Others, for example David Chalmers and Colin McGinn, argue that while discovering correlations between consciousness and the brain is a scientifically valuable activity, it cannot provide an explanation of the phenomenological aspects of conscious experience. This is either because there is not one available in principle, as Chalmers thinks, or because, just as dogs have cognitive limitations that make it impossible for them to understand calculus, people have cognitive limitations that make it impossible for them to understand how their brains produce consciousness, as McGinn believes." [mijn nadruk] (67)

"Perhaps the important properties of consciousness are best understood functionally, too. Even if computers won’t be conscious in exactly the same way as humans, perhaps they can be designed to function as if they have the relevant similar capacities. Machine consciousness is developing as a subspecialty within AI. (...) Stan Franklin, designer of a computer system named IDA, which he argues has attributes of being conscious, proposes that an artificial agent is functionally conscious if its architecture and mechanism allow it to do many of the same tasks that human consciousness enables humans to do. (...) Roboticists working on machine consciousness, for example Owen Holland and Murray Shanahan, recognize that building a system whose consciousness is comparable to that of humans is a long way off. Nevertheless, they certainly believe that robots that are both functionally and phenomenally conscious will eventually be successfully developed."(67-68)

"Armchair arguments that there is a glass ceiling for (ro)bot intelligence are not entirely worthless; they might even turn out to have a correct conclusion. However, that can’t be judged from the present. In the meantime, these arguments help focus attention on what is and is not important. Most of the experienced roboticists we have talked to do not think that there is a glass ceiling. This is unsurprising, of course, since pessimists tend to get weeded out of the profession. However, we predict that in the near term, (ro)bots will continue to converge toward human capacities while also showing considerable cognitive deficits. Nevertheless, as we will illustrate later, the present state of AI, artificial life, and robotics is sufficient for the initiation of some interesting experiments in the design of AMAs, and for additional experiments just around the corner."(68-69)

[How often in the history of AI has something been believed in, how often have optimistic predictions been made that never came true? Not great that there is so little scepticism in that little world.]

"Whether computer understanding will ever be adequate to support full moral agency remains an open question. The problem that needs to be researched is whether there is morally relevant information that is inaccessible to systems lacking human-like understanding or consciousness. Is, for example, the ability to deal with the subtleties of others’ feelings dependent on empathy or intuitions of those feelings that would not be possible for a computer?" [mijn nadruk] (69)

[Yes.]

"Just as a computer system can represent emotions without having emotions, computer systems may be capable of functioning as if they understand the meaning of symbols without actually having what one would consider to be human understanding."(69)

[A computer system cannot 'represent emotions' at all. Being able to raise your eyebrows like Kismet - what does that prove? It is no more than an imitation of human behaviour, a nice toy, the way a baby doll can say 'mama'. Even a child realizes that this has nothing to do with real people or real feelings. But the authors tend to reduce everything to the outside, to behaviour, and immediately start cheering when something on a robot's exterior resembles observable human behaviour.

It is the problem of Westworld: William: "Are you real?" Angela (who is a robot): "Well, if you can't tell, does it matter?" That answer too focuses on behaviour, on the outside. A humanoid robot like Angela, indistinguishable to us from a human being, is a nice fantasy in a TV series, but cannot exist.]

(73) Chapter 5 - Philosophers, engineers, and the design of AMAs

"Many experts believe that military robots are likely to be the first place where AMAs will be needed."(73)

[So the US DoD funds all kinds of research into military robots. And the researchers accept the grants as if none of it matters. You then ask no questions about the very idea of warfare. No, you cooperate with the military without any criticism to build AMAs. From that normative basis we are going to work on AMAs? Well, that bodes well. But if philosophers were to point this out, it would of course be ignored. So why chapters on whether philosophy can contribute? Naturally it turns out to be philosophy that already accepts the existence of all these things and follows the lead of the engineers:]

"So in this and the next four chapters, we set aside the philosophical issues and focus on ways ethical considerations can be introduced into the platforms presently available." [mijn nadruk] (74)

"This emphasis on the practical may appear to philosophers as an oversimplification of ethics. We recognize that both ethical theory and applied ethics are full of complexity. Appreciation of the complexity is useful, insofar as it suggests ways of making computational systems more sophisticated. It is less useful if it is simply directed at dismissing the project of building AMAs."(76)

"With respect to computability, however, the moral principles proposed by philosophers leave much to be desired, often suggesting incompatible courses of action, or failing to recommend any course of action."(77)

"Given the range of perspectives regarding the morality of specific values, behaviors, and lifestyles, perhaps there is no single answer to the question of whose morality or what morality should be implemented in AI. Just as people have different moral standards, there is no reason why all computational systems must conform to the same code of behavior. One might envisage designing moral agents that conform to the values of a specific religious tradition or to one or another brand of secular humanism. Or the moral code for an AMA might be modeled on some standard for political correctness. Presumably, a robot could be designed to internalize the legal code of a country and strictly follow that country’s laws. This concession to culturally diverse AMAs is not meant to suggest that there are no universal values, only to acknowledge that there may be more than one path to the design of an AMA. Regardless of what code of ethics, norms, values, laws, or principles prevails in the design of an AMA, that system will have to meet externally determined criteria as to whether it functions successfully as a moral agent."(78-79)

[No, really? That is exactly what should be thought through. But what we will get is presumably something like adopting the self-evident normative premises of clients such as the military.]
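[The 'more than one path' the authors mention boils down to the moral code being a module that someone plugs in. A minimal sketch of my own (hypothetical, not from the book), which mainly shows that the choice of code is made entirely outside the machine:

    # Hypothetical: an AMA parameterized by a "moral code" chosen by its makers.
    # The machine evaluates actions only against whatever code it was given;
    # which code that is remains a human (commercial, military, ...) decision.
    LEGAL_CODE = {"forbidden": {"fire_weapon", "share_medical_data"}}
    CLIENT_CODE = {"forbidden": {"refuse_order"}}          # invented example

    def permitted(action, code):
        return action not in code["forbidden"]

    class AMA:
        def __init__(self, code):
            self.code = code                 # injected from outside

        def decide(self, candidate_actions):
            allowed = [a for a in candidate_actions if permitted(a, self.code)]
            return allowed[0] if allowed else "do_nothing"

    print(AMA(CLIENT_CODE).decide(["fire_weapon", "wait"]))   # -> "fire_weapon"
    print(AMA(LEGAL_CODE).decide(["fire_weapon", "wait"]))    # -> "wait"

The same machine is "moral" or not depending entirely on whose code was plugged in.]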

(83) Chapter 6 - Top-Down morality

"Despite the great enhancements in computing technology since Leibniz’s day, we think top-down theories may not serve to realize this dream. We’ll show that the prospects for implementing ethical rules as formal decision algorithms are rather dim. Nevertheless, people do appeal to top-down rules to inform and justify their actions, and designers of AMAs will need to capture this aspect of human morality. (...) The challenge facing commandment models is what to do when the rules conflict: is there some further principle or rule for resolving the conflict?"(83-84)

"The limitations of top-down approaches nevertheless add up, on our view, to the conclusion that it will not be feasible to furnish an AMA with an unambiguous set of top-down rules to follow. Not everyone agrees, and later we will discuss the important efforts of Susan and Michael Anderson with their MedEthEx system, which organizes three prima facie duties (respect for autonomy, beneficence, and nonmaleficience) into a consistent structure based on “expert” judgments. Susan Anderson believes that one consistent set of principles will emerge because she assumes that experts generally agree with each other. However, the same principles have been used throughout medical ethics, and there are countless situations where they lead to conflicting recommendations for action. We think that the task confronting AMAs is that of learning to deal with the inherently ambiguous nature of human moral judgment, including the fact that even experts can disagree.
How can machines operate successfully if things are as ambiguous as we say? For that matter, how do humans do it? Humans learn to distinguish the letter of the law from the spirit of the law. Humans identify the ability to deal with the incoherence and complexity of life, to find balance between knowing and doubting, as practical wisdom. Wisdom emerges from experience, from attentive doing and observing, from the integration of cognition, emotions, and reflection. ... Does the need for such wisdom mean that humans have to build affective/emotional capacities as well as reflective reasoning capacities into AMAs? Possibly, and we’ll discuss this topic in chapter 10." [mijn nadruk] (97)

[As if that would ever be possible.]
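[The conflict problem in the quote above - prima facie duties pointing in different directions, as in MedEthEx - can be illustrated with a small sketch of my own; hypothetical, not the Andersons' actual system. As soon as two duties disagree, something has to break the tie, and that something is again supplied by people:

    # Hypothetical top-down check against three prima facie duties.
    # Each duty scores an action; a conflict shows up as disagreement in sign.
    def autonomy(action):
        return +1 if action == "respect_refusal" else -1

    def beneficence(action):
        return +1 if action == "give_treatment" else -1

    def nonmaleficence(action):
        return +1 if action != "give_risky_treatment" else -1

    DUTIES = [autonomy, beneficence, nonmaleficence]

    def evaluate(action):
        scores = [duty(action) for duty in DUTIES]
        if len(set(scores)) > 1:
            # The duties conflict; whatever breaks the tie (weights, "expert"
            # rankings, case precedents) has to be supplied by the designers.
            return ("conflict", scores)
        return ("agreement", scores)

    print(evaluate("respect_refusal"))   # -> ('conflict', [1, -1, 1])

So even in the tidiest top-down scheme the hard part - resolving the conflicts - is not done by the machine.]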

(99) Chapter 7 - Bottom-Up and developmental approaches

"Genes. development, and learning all contribute to the process of becoming a decent human being. The interaction between nature and nurture is, however, highly complex, and developmental biologists are only just beginning to grasp just how complex it is. Without the context provided by cells, organisms, social groups, and culture, DNA is inert. Anyone who says that people are “genetically programmed” to be moral (or psychopathic for that matter) has an oversimplified view of how genes work.
Genes and environment interact in ways that make it nonsensical to think that the process of moral development in children, or any other developmental process, can be discussed in terms of nature versus nurture. Developmental biologists now know that it is really both, or nature through nurture. A complete scientific account of moral evolution and development in the human species is a very long way off. And even if one had such an account, it is not clear how one could apply it to digital computers. Nevertheless, evolutionary and developmental ideas will continue to play a role in the design of AMAs." [mijn nadruk] (99)

[You see this often in this book: we do not yet know at all how it works, but we apply it anyway in the development of AMAs. Contradictory.]

"Simulating a child’s mind is only one of the strategies being pursued for the design of intelligent agents. In 1975, John Holland’s invention of genetic algorithms generated much excitement about the potential for evolving adaptive programs." [mijn nadruk] (100)

"Recognizing, however, that virtual worlds are no substitute for the challenges and complexities of the real world, roboticists have also adapted Alife techniques to help them design robots that operate in physical environments. This is the field now known as evolutionary robotics." [mijn nadruk] (100)

"Insofar as artificial babies and Alife both provide methods for generat- ing AMAs, they are examples of “bottom-up” approaches, in which system design is not explicitly guided by any top-down ethical theory. Traditional engineering approaches of testing and refining intelligent systems can also be thought of as following a bottom-up course of development. Different approaches have different strengths, weaknesses, and implicit biases, which we will attempt to describe in the rest of this chapter. We’ll begin with a dis- cussion of the evolution-inspired approaches before considering learning- based approaches to moral development." [mijn nadruk] (101)

"Nevertheless, there are hazards inherent in learning systems. The vision of learning systems developing naturally toward an ethical sensibility that values humans and human ethical concerns is an optimistic vision that sits in sharp contrast to the more dire futuristic predictions regarding the dangers AI poses."(110)

(117) Chapter 8: Merging top-down and bottom-up

"If neither a pure top-down approach nor a bottom-up approach is fully adequate for the design of effective AMAs, then some hybrid will be necessary. Furthermore, as noted, the top-down, bottom-up dichotomy is somewhat simplistic. Engineers commonly start with a top-down analysis of complex tasks to direct the bottom-up assembly of components." [mijn nadruk] (117)

[That not doing things is also an option has not yet dawned on the authors. Next they bring in virtue ethics, Plato's for example.]
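[What a 'hybrid' of top-down and bottom-up would look like in practice can be sketched roughly as follows - my own hypothetical illustration, not the authors' design: a learned, bottom-up preference is filtered by hard top-down constraints. Both parts are supplied by the designers:

    # Hypothetical hybrid AMA: a learned scoring function proposes, a fixed
    # top-down rule layer disposes. Both the scores and the rules are human-made.
    FORBIDDEN = {"use_lethal_force"}                 # top-down constraint

    def learned_preference(action):
        # Stands in for a bottom-up, trained model; invented numbers.
        return {"use_lethal_force": 0.9, "retreat": 0.4, "warn": 0.6}.get(action, 0.0)

    def hybrid_decide(candidate_actions):
        allowed = [a for a in candidate_actions if a not in FORBIDDEN]
        if not allowed:
            return "do_nothing"
        return max(allowed, key=learned_preference)

    print(hybrid_decide(["use_lethal_force", "retreat", "warn"]))  # -> "warn"

The "hybrid" label does not change who decides what is forbidden and what the model was trained to prefer.]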

"Furthermore, as we will discuss in chapter 10, a moral agent may need to be embodied in the world, have access to emotions or emotion-like information, and have an awareness of social dynamics and customs if it is to function properly in many contexts."(118)

[Well, I wish you the best of luck ...]

"Just as utilitarians do not agree on how to measure utility, and deontologists do not agree on which list of duties apply, contemporary virtue ethicists do not agree on a standard list of virtues that any moral agent should exemplify."(119)

[What a surprise! ...]

"Rather than focusing on these differences, we will direct our attention to the computational tractability of virtue ethics: could one make use of virtues as a programming tool?"(119)

[No.]

(125) Chapter 9 - Beyond vaporware?

"Autonomous moral agents are coming. But where are they coming from? In this chapter, we describe software that is being designed with ethical competency in mind. Full AMAs are still “vaporware” — a promise no one knows how to fulfill. But software design has to start somewhere, and these projects provide the steam needed to drive the mental turbines that will generate further research.
In this chapter, we’ll canvass three general approaches to ethical software. Logic-based approaches attempt to provide a mathematically rigorous framework for modeling ethical reasoning in a rational agent. Case-based approaches explore various ways of inferring or learning ethically appropriate behavior from examples of ethical or unethical behavior. Multiagent approaches investigate what happens when many agents following various ethical strategies interact with one another. It’s likely that there are other approaches than these three, but they are the only ones being applied where some research into actual coding has already commenced." [mijn nadruk] (125)

[What an extraordinarily pretentious claim. It is pure faith in technology, based on nothing.]
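[Of the three approaches, the 'case-based' one is the easiest to make concrete. A minimal sketch of my own - hypothetical, not one of the systems the authors survey: classify a new situation by looking up the most similar labelled case. Everything hinges on who labelled the cases and how similarity is measured:

    # Hypothetical case-based "ethical" classifier: nearest labelled case wins.
    # The case base and the similarity measure embody the judgements of the
    # people who built it; the machine only retrieves.
    CASES = [
        # (features, label) - invented toy data
        ({"harm": 1, "consent": 0}, "impermissible"),
        ({"harm": 0, "consent": 1}, "permissible"),
    ]

    def distance(a, b):
        return sum(abs(a[k] - b[k]) for k in a)

    def judge(new_situation):
        features, label = min(CASES, key=lambda c: distance(c[0], new_situation))
        return label

    print(judge({"harm": 1, "consent": 1}))   # follows the nearest stored case

The retrieval is mechanical; the 'ethics' sits entirely in the hand-labelled cases.]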

"Even though his robot isn’t deceptively cuddly, Scheutz worries about the ethical implications of adding an emotional cue. Is the robot’s rising voice pitch deceptive? The robot isn’t really stressed or afraid. It doesn’t really feel anything at all. But it may trick people into treating it as if it has such characteristics. This is the programmer’s ethical dilemma, not the robot’s, which itself is not being deceptive at all. If faking emotions has a positive ethical impact (to use Jim Moor’s term), perhaps the programmer is off the hook, so long as this implicit ethical agent is restricted to the narrow range of activities for which it is designed. A more autonomous moral agent would need to decide when deception is permissible and when it is not. But to our knowledge, no one is working on such sophisticated decision making." [mijn nadruk] (135)

(139) Chapter 10 - Beyond reason

"Is reasoning about morally relevant information all that is required for the development of an AMA? Even though Mr. Spock’s capacity to reason far exceeded that of Captain Kirk in the Star Trek series, the more emotional and intuitive Kirk was presumed by the crew of the Enterprise to be a better decision maker. Why? If (ro)bots are to be trusted, will they need additional faculties and social mechanisms, for example emotions, to adequately appreciate and respond to moral challenges? And if so, how will these abilities be integrated with the top-down and bottom-up approaches to moral decision making that we imagined a supportive ethicist providing to the engineering colleague who came looking for help?" [mijn nadruk] (139)

[As for that first sentence: as if that had not already proved difficult enough. All the approaches presented in the previous chapter have fallen short in practice.]

"In this chapter, we will first outline the importance of suprarational faculties for moral decision making and then describe the tentative steps engineers are taking to implement emotions in artificial systems. In chapter 11, we’ll discuss hybrid systems, including those that have social skills and virtues."(139-140)

[Artificial systems cannot, in principle, have emotions.]

"Just as Deep Blue II beat Gary Kasparov by playing chess in a manner different from the way a human would play, it is quite conceivable that an artificial agent might display moral judgment without utilizing the same cognitive or affective tools a human moral agent would apply."(142)

[As if chess and moral judgements / decisions could be put on a par.]

"Artificial intelligence engineers acknowledge that humans are a long way from knowing how to develop systems that can feel pleasure or pain, or have human-like emotions. The robots available today do not have nerves, neurochemicals, feelings, or emotions, nor is it likely that robots in the near future will. Nevertheless, sensory technology is an active area of research, and it is here that one might look for the foundations of feelings and emotions. Microphones and charged couple devices (found in digital cameras) are ubiquitous technologies that need no introduction from us. Some of the technological developments relating to the other senses may be less familiar." [mijn nadruk] (150)

(171) Chapter 11 - A more human-like AMA

"Any designer of an AMA architecture must therefore decide what components to include. Should AMAs have specific components dedicated to ethical sensibilities and reasoning? Or should these functions be carried out by more general mechanisms?"(171)

[Pointless questions if you do not share the premise that AMAs are possible.]

"An example of the first type of architecture is Ronald Arkin’s Army-funded project, discussed at the beginning of chapter 5. He is working on the problem of how to make robotic fighting machines capable of dealing with the complicated ethics of wartime behavior.(...) And irrespective of what one thinks about the morality of robotic fighting machines, we think that the four components of Arkin’s system could be adapted for other more benevolent applications."(171-172)

[With equally pointless solutions, especially if you do not share that kind of military premise. But the authors are fine with anything. That's just how it is, they say.]

The rest of the chapter is about Stan Franklin and his 'learning intelligent distribution agent' (LIDA).

"The approach to building AMAs outlined in this chapter differs from the approaches described in chapter 9. There we surveyed software projects that focus on one aspect of moral decision making. Here we have an approach that provides a general architecture for combining multiple kinds of morally relevant considerations. However, at this stage, LIDA is only partially implemented and is largely a conceptual model."(187)

(189) Chapter 12 - Dangers, rights, responsibilities

First the authors set the enthusiasts about intelligent robots (people like Kurzweil and Moravec) against the sceptics.

"Which futuristic visions are likely within the near future (twenty to fifty years) and which are speculative fantasies? For every Ray Kurzweil prophesying that the Singularity (a point when AI exceeds human intelligence) is near, there are perhaps two equally noted scientists dubious of such claims. Scientists who believe it is inevitable that humans will create advanced forms of AI differ on how soon strong AI will be possible (ten to two hundred years), while those skeptical of the entire enterprise differ on whether it is possible. The skeptics emphasize the difficulty of the technological challenges that must be surmounted, while the believers are more likely to downplay it. The true believers tend to gloss over the ethical challenges that will be entailed in building AMAs, while the skeptics, to our minds, seem more sensitive to the risk that the systems that are built may acquire and act on values that are less than benign. This is certainly a generalization, but when the believers discuss the ethics of AI systems with superior intelligence, they tend toward dubious or naive assumptions as to why humans will be able to trust the beneficence of such systems. In these differences, we may just be observing psychological orientations (the cup is half full or half empty) and the need for those who identify with grand challenges to be optimistic about the social benefits that will be derived from their projects." [mijn nadruk] (190)

[There are quite a few more reasons why adherents of strong AI hold that position. That ought to be properly analysed some time. They are emotionally impoverished men with little empathy for other people. They own companies or work for companies, so they have all sorts of interests tied to that position. Those positions are embedded in a capitalist American society. And so on. In what follows, various views are presented on the possibilities of AI, intelligent computers, the fear of it, the near-religious faith in it, etc.]

"Peter Norvig, director of research at Google and coauthor of the classic modern textbook Artificial Intelligence: A Modern Approach, is among those who believe that morality for machines will have to be developed alongside AI and should not be solely dependent on future advances. By now, it should be evident that this is also how we view the challenge of developing moral machines. Fears that advances in (ro)botic technology might be damaging to humanity underscore the responsibility of scientists to address moral considerations during the development of systems."(193-194)

"We do not pretend to be able to predict the future of AI. Nevertheless, the more optimistic scenarios are, to our skeptical minds, based on assumptions that border on blind faith. (...) However, we agree that systems with a high degree of autonomy, with or without superintelligence, will need to be provided with something like human-friendly motivations or a virtuous character. Unfortunately, there will always be individuals and corporations who develop systems for their own ends. That is, the goals and values they program into (ro)bots may not serve the good of humanity. Those who formulate public policy will certainly direct attention to this prospect. It would be most helpful if engineers took the potential for misuse into account in designing advanced AI systems. The development of systems without appropriate ethical restraints or motivations can have far-reaching consequences, even when (ro)bots have been developed for socially beneficial ends."(194-195)

[Funny that the authors call themselves 'skeptical minds' here. They could have fooled me! After all, the book is full of blind faith in the possibility and desirability of AMAs, in the possibility, for instance, of equipping machines with those 'ethical restraints'. I think we would do better to equip the makers of machines with 'ethical restraints' by subjecting them to strict regulation. But in a capitalist society that will never happen. It is naive to think that robots and the like will be designed 'for the good of humanity'; I think they will rather be produced for the deep pockets of the shareholders.]

"The fear that future systems could not be restrained adequately and could be destructive to humans leads a few critics to suggest that research into advanced AI should be stopped before it gets out of hand. We’ll address the public policy challenges posed by AI later. But first, let’s look at the criteria for designating (ro)bots as moral agents and whether they may some day be deserving of civil and legal rights."(197)

[That last point again assumes that it is possible to build AMAs. But machines are never persons and therefore not responsible for what they do. The people who make them are.]

"However, as (ro)bots become more sophisticated, two questions may arise in the political arena. Can the (ro)bots themselves, rather than their manufacturers or users, be held directly liable or responsible for damages? Do sophisticated (ro)bots deserve any recognition of their own rights?"(208)

[The answer is a simple no.]

"The adoption of mechanical devices, robots, and virtual reality by the sex industry is nothing new, and while some are offended by such practices, governments in democratic countries have largely turned away from trying to legislate the private practices of the individuals who use these products. However, other social practices are likely to ignite public debate. Examples are the rights of humans to marry robots and of (ro)bots to own property." [mijn nadruk] (210)

[Why would you marry at all? Why would you own property? Just that alone. But this is another example of treating machines as persons, by people who apparently cannot help idealizing machines.]

"In addition, the political arena can be especially unpredictable and chaotic when confronted with issues in which many different constituencies have a stake. Because of the commercial forces involved in (ro)botics, and because it is hard to know exactly what directions the technological developments will take, it is clear that there will be many stakeholders." [mijn nadruk] (211)

[That is mentioned almost in passing, in a subordinate clause, while it is one of the most important factors for future developments, a factor over which politics often has no influence either, or wants none. Commercial interests also often cause dangers and risks, accidents and the like to be swept under the carpet, because acknowledging them would cost money, lead to reputational damage, and so on.]

(215) Epilogue - (Ro)bot minds and human ethics

"The quest to develop AMAs will also feed back on ethical understanding by providing a platform for experimental investigation. For instance, by tinkering with the correspondence between what is said, what is done, and what is conveyed by nonverbal means, researchers will be able to systematically test how words, deeds, and gestures interact to shape ethical judgment. And by simulating the interactions among agents with different ethical viewpoints, it will be possible to supplement the speculative thought experiments of science fiction and philosophy with testable social and cognitive models."(216-217)

[Well, great.]