|Image Source: Comic Book dot com - Star Trek|
If you're enough of a fan, as I am, to pay for the CBS streaming service (it has some benefits: Young Sheldon and the umpteenth reboot of The Twilight Zone, hosted by Oscar winner Jordan Peele), you'll have noticed that the AI in Starfleet's "Control" looks an awful lot like...The Borg. I've enjoyed the latest iteration immensely, and I'm rooting for at least a season 3.
There's already speculation on Screen Rant that this might be some sort of galactic "butterfly effect." Discovery has taken some license with my previous innocence, even before Section 31: we're obviously not "the good guys" with phasers, technobabble and karate chops, as I once thought.
That, of course, has been the nature of speculative fiction since Mary Shelley penned Frankenstein: that in playing God, humanity would manage to create something that just might kill us. Various threats, from nuclear power to climate change, have taken on this personification. I've often wondered if intelligence is its own Entropy. Whole worlds above us might be getting along just fine without a single invention of language, science, tools, cities or spaceflight, animal species living and dying with nothing more than their instinct, hunger and the inborn need to procreate, unless a meteor sends them into extinction. Homo sapiens or Homo stultus...
The Greek word mimesis is usually translated as "imitation," but it is more accurately rendered as "re-presentation." It is the Plato-Aristotle origin of the colloquial phrase "art imitates life."
Re-presented for your consumption and contemplation:
Yoshua Bengio is one of three computer scientists who last week shared the US$1-million A. M. Turing award — one of the field’s top prizes.
The three artificial-intelligence (AI) researchers are regarded as the founders of deep learning, the technique that combines large amounts of data with many-layered artificial neural networks, which are inspired by the brain. They received the award for making deep neural networks a “critical component of computing”.
The other two Turing winners, Geoff Hinton and Yann LeCun, work for Google and Facebook, respectively; Bengio, who is at the University of Montreal, is one of the few recognized gurus of machine learning to have stayed in academia full time.
But alongside his research, Bengio, who is also scientific director of the Montreal Institute for Learning Algorithms (MILA), has raised concerns about the possible risks from misuse of technology. In December, he presented a set of ethical guidelines for AI called the Montreal declaration at the Neural Information Processing Systems (NeurIPS) meeting in the city.
Do you see a lot of companies or states using AI irresponsibly?
There is a lot of this, and there could be a lot more, so we have to raise flags before bad things happen. A lot of what is most concerning is not happening in broad daylight. It’s happening in military labs, in security organizations, in private companies providing services to governments or the police.
What are some examples?
Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential positive benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it.
— From "AI pioneer: 'The dangers of abuse are very real'" by Davide Castelvecchi, Nature: Yoshua Bengio, winner of the prestigious Turing award for his work on deep learning, is establishing international guidelines for the ethical use of AI.