In an article in the current issue of The Guardian, Philosophy Professor Adina Roskies (also Chair of the Cognitive Science Program at Dartmouth) explores some of the ethical issues involved in brain-computer interface (BCI) technology, which integrates cognitive activity with a computer. “When BCIs decode neural activity into some sort of action [like moving a robot arm] an algorithm is included in the cognitive process,” she explains. “As these systems become more complex and abstract, it might become unclear as to who the author of some action is, whether it is a person or machine.” At the same time, she notes, “in the case where people can’t articulate their own words, these systems will help produce some sort of verbalization that they presumably want to produce.... Given what we know about predictive systems, you can at least imagine cases in which these things produce outputs that don’t directly reflect what the person intended.” And although we are still a fair way off from such a reality, Roskies feels, “the time to start thinking through some of the ethical implications of these systems is now.” Read the whole article here.