3

A September 2018 article from The Sun (promulgated by Fox News) is headlined:

EVIL BOTS Robots can become ‘racist and sexist’ all on their own, study finds

The study was based on a "give and take" system in which the robots ended up donating to each other in small groups, even shunning others.

I haven't been able to see the whole study, but I'm skeptical of the claim because:

  1. Although I'm no expert in robotics, I think that for a robot to be able to develop prejudices, its brain would very likely have to be already extremely close to a human one.

  2. That they would share things in small groups doesn't necessarily mean they do so because they developed a prejudice; maybe the system itself favored giving and taking in small groups because it yielded a bigger gain.

Anyway, maybe robotics is already advanced enough for that, so I'm asking:

Have robots become racist and sexist?

Brythan
user2638180
  • Where is the required "notable claim"? – Daniel R Hicks Sep 16 '18 at 13:51
  • @DanielRHicks, that robots can develop prejudices on their own isn't a notable claim? I think it's a big one. – user2638180 Sep 16 '18 at 13:54
  • Neither Fox News nor the Sun is credible, absent hard references to real research. And note that "virtual simulated robots" is a rather meaningless term. – Daniel R Hicks Sep 16 '18 at 14:24
  • 11
    @DanielRHicks: The notability claim does not have to be to a credible (in our eyes) source. It is supposed to demonstrate (amongst other things) that it is likely that many people believe it. Many people express trust in Fox News and the Sun. – Oddthinking Sep 16 '18 at 14:25
  • @Oddthinking - Well, the source should at least be intelligible. The Sun article is pure gobbledygook. And it entirely misses the point that the real study was of human prejudice, not robotics. – Daniel R Hicks Sep 16 '18 at 14:34
  • 4
    @DanielRHicks: That it misses the point makes it a good question for Skeptics.SE (although I admit to having a soft spot for the questions where the answer is "Yes, the claim is right"). Your understanding that it is gobbledygook probably puts you at an advantage of most newspaper readers who have little familiarity with the subject matter. – Oddthinking Sep 16 '18 at 18:20

2 Answers

18

The original article is available:

Indirect Reciprocity and the Evolution of Prejudicial Groups, Roger M. Whitaker, Gualtiero B. Colombo & David G. Rand, Scientific Reports Volume 8, Article number: 13247 (2018)

The article is not about robots (i.e. machines that can carry out actions) and doesn't even mention robots. It mentions sexism and racial extremism only once - in the first paragraph of the introduction (where it also manages to tie in support for Brexit).

The article is about evolutionary game theory, and it shows that certain forms of "in-group favoritism" can evolve in computer models that have been designed to show precisely that.
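To make that concrete, here is a minimal sketch of the kind of tag-based donation game such models use. This is not the paper's actual model; the two-group setup, the payoff values, and the crude imitation step are all illustrative assumptions:

    import random

    # Minimal sketch (not the paper's model): agents carry a group "tag" and a
    # strategy bit saying whether they donate only to their own group. Donating
    # costs the donor C and gives the recipient B; payoffs drive imitation, so
    # in-group favoritism can spread if it pays off.

    B, C = 1.0, 0.3           # benefit to recipient, cost to donor (assumed)
    N_AGENTS, N_ROUNDS = 100, 200

    def make_agent():
        return {"tag": random.choice("AB"),             # group membership
                "ingroup_only": random.random() < 0.5,  # prejudicial strategy?
                "payoff": 0.0}

    agents = [make_agent() for _ in range(N_AGENTS)]

    for _ in range(N_ROUNDS):
        # Each agent meets a random partner and decides whether to donate.
        for donor in agents:
            recipient = random.choice(agents)
            if recipient is donor:
                continue
            if (not donor["ingroup_only"]) or donor["tag"] == recipient["tag"]:
                donor["payoff"] -= C
                recipient["payoff"] += B
        # Crude selection step: the lowest-payoff agent copies the highest.
        agents.sort(key=lambda a: a["payoff"])
        worst, best = agents[0], agents[-1]
        worst["tag"], worst["ingroup_only"] = best["tag"], best["ingroup_only"]

    share = sum(a["ingroup_only"] for a in agents) / N_AGENTS
    print(f"Share of in-group-only donors after {N_ROUNDS} rounds: {share:.2f}")

Because in-group-only donors pay the donation cost less often while still receiving benefits, the "prejudicial" strategy tends to spread here; the point is that the outcome follows from the payoff structure the designers chose, not from anything resembling human-like cognition.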

Oddthinking
  • 6
    It's not clear what "all on their own" means in this context. The experimenter went out of their way to demonstrate that it is possible for such behaviour to evolve, given the perfect assumptions and conditions for it to evolve. [I am not attacking the paper, just arguing the claim is meaningless.] – Oddthinking Sep 16 '18 at 14:19
-4

GIGO (Garbage In, Garbage Out): bias consciously or unconsciously programmed in will become apparent in the output. Because many AI systems keep learning from the data they themselves generate, the biases could, in theory, worsen or lessen over time, depending on the algorithm.
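As a toy illustration of "bias in, bias out" (the data and scoring rule here are entirely hypothetical, not any real system), a model fit to skewed historical decisions simply reproduces the skew:

    from collections import Counter

    # Hypothetical, deliberately skewed hiring history: (features, was_hired).
    history = [
        ({"degree": True,  "gender": "m"}, True),
        ({"degree": True,  "gender": "m"}, True),
        ({"degree": True,  "gender": "f"}, False),
        ({"degree": False, "gender": "m"}, True),
        ({"degree": True,  "gender": "f"}, False),
        ({"degree": False, "gender": "f"}, False),
    ]

    # "Training": count how often each feature value co-occurred with hiring.
    hired_counts, total_counts = Counter(), Counter()
    for features, hired in history:
        for key, value in features.items():
            total_counts[(key, value)] += 1
            if hired:
                hired_counts[(key, value)] += 1

    def score(features):
        # Average historical hire rate of the applicant's feature values
        # (assumes every value was seen in training -- a toy simplification).
        rates = [hired_counts[(k, v)] / total_counts[(k, v)]
                 for k, v in features.items()]
        return sum(rates) / len(rates)

    # Two equally qualified applicants get different scores: the bias in the
    # training data passes straight through to the model.
    print(score({"degree": True, "gender": "m"}))  # higher
    print(score({"degree": True, "gender": "f"}))  # lower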

Design News: Bias In, Bias Out: How AI can become racist

New York Times Opinion: Artificial Intelligence’s White Guy Problem, 26 June 2016

Possum-Pie
  • 4
    Your answer would had been better received if you had been a little more verbose. Right now it looks more like a headline for an answer you didn't write. Check *Oddthinking*'s answer, which says the same (I guess?) you were saying, but more clearly. – Rekesoft Sep 19 '18 at 08:41
  • @Rekesoft, thanks. I'm new to StackExchange, and quickly discovered that longer answers tend to be picked apart more. I had a lot more in my original answer about how the question boils down to "can a computer learn racism", but that was picked apart, so I edited it out. We can all see where that got me! – Possum-Pie Sep 19 '18 at 10:36