AI Ethics: The Perils of Creating an Artificial Intelligence Species
Samsung STAR Labs CEO Pranav Mistry unveiled a squad of artificial humans called Neons at CES 2020. Their arrival has re-sparked conversations about AI ethics. Every time a new artificial intelligence model wanders into human territory, the AI ethics debate blooms, then wilts. This roller-coaster cycle of public interest has us wondering: Can ethics concerns remain on the front burner of AI advancements?
Responsible Parties
Harvard Magazine breaks down the legal and ethical concerns regarding AI-human interactions.
By way of example, let’s say that a self-navigating car strikes and kills a pedestrian. If a human driver were behind the wheel, authorities would likely charge that driver with manslaughter or vehicular homicide. But when a self-driving car strikes, who’s responsible?
The interface between human and machine is rife with ethical dilemmas that ultimately hinge on one thing: ownership. Is the company that manufactures the device responsible for unintended, negative outcomes? The individuals who wrote the code? Or is it the responsibility of the behavioral scientists whose work led to the incorrect decision?
AI Ethics of Learning From Data
AI’s ability to make decisions without human input poses another ethical problem. Aside from data collection quandaries, ethicists have not reached a consensus on whether AI and machine learning are, in and of themselves, an ethical violation.
The problem is that machine learning obscures the boundaries of responsibility. In the days of old, humans exclusively programmed AI devices. When programming problems arose, engineers could isolate the offending code and make changes. Machine learning, however, relies on automated programs that parse data and arrive at algorithmic decisions, and the parameters of those decisions are purposely left vague to allow for flexibility and machine decision-making.
Purposeful Droids
If the responsibility puzzle is ever solved, and if ethicists determine that AI and machine learning are not ethical violations of the human code of conduct, there’s still another hurdle: need and purpose.
Mistry envisions the droids replacing TV anchors, receptionists, and actors. But to some, Mistry’s desire to create a new species is disconcerting. Since our current species isn’t performing optimally, the introduction of a new humanoid subspecies may present serious problems. With $22 billion of funding behind AI projects like Neon, CNET correspondent Rae Hodge posits that we may want to funnel our dollars elsewhere. Hodge questions “the ethics of creating a sentient life form on a planet where billions of animals are currently burning to death in searing contortions thanks to climate change and wildfires.”
Is a society consumed with wealth accumulation at the expense of nature capable of creating a subspecies that’s ethically sophisticated? Or will the rise of AI beings coincide with the next step of human enlightenment? Whatever the case, AI is here, and it’s not going anywhere.