[hr_caption]Introduction[/hr_caption]
The pace of change accelerates every day in the media and online: cloning, genetics, genetically modified crops, nanotechnologies, artificial intelligence... not a month goes by without new cries of outrage at scientific progress! Most often, religious or anti-technology groups speak in the name of "ethics" or with supposedly irrefutable arguments such as "man should not play God". Of course, the arguments developed by these "righteous people" rest either on conspiracy theories, in which the absence of proof becomes the proof, or on quotes from famous influencers or scientific personalities, often taken out of context.
These objections are rarely backed by scientific evidence or proof: more often it is superstition, religion, an uncontrollable fear of the unknown or plain stupidity that drives these actions and words, rather than ethics or "common sense", which their proponents cannot even define.
Today, these are the same kind of individuals who, centuries ago, called for the burning of "witches", "wizards" and other "creatures of the devil", and who slowed humankind's technological and scientific development. They are also the very people who end up serving the future they claim to be fighting: a future in which humankind is extinct.
And yet, as of Wednesday 2 August 2017, humanity had already consumed more resources than the Earth can regenerate in a year; how can we believe that anything other than scientific progress will help us overcome the future challenges that put our species at risk of extinction?
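For readers curious about the arithmetic behind that date, here is a minimal sketch of the "overshoot day" calculation. The 1.7-Earths consumption ratio used below is an assumption taken from publicly reported estimates for 2017, not from this article, and the figures are illustrative only.

[code]
from datetime import date, timedelta

# Rough "overshoot day" arithmetic with illustrative figures.
# Assumption (not from the article): humanity consumed roughly 1.7 Earths'
# worth of renewable resources per year around 2017.
earths_consumed_per_year = 1.7

# The year's regenerative budget is spent after 365 / 1.7 days.
days_until_budget_spent = 365 / earths_consumed_per_year
overshoot_day = date(2017, 1, 1) + timedelta(days=round(days_until_budget_spent))

print(f"Budget exhausted after ~{days_until_budget_spent:.0f} days -> {overshoot_day}")
# With a ratio of 1.7 this lands in early August 2017, close to the
# 2 August 2017 date cited above.
[/code]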
[hr_caption]The root of the problem is the media's doom-mongering interpretation[/hr_caption]
It made the headlines: Elon Musk worrying about the unbridled progress of artificial intelligence and calling for its regulation, and Stephen Hawking predicting the end of humanity "soon". In one interview, Hawking mentioned a number of potential threats: artificially intelligent robots, climate change, genetically modified viruses and extraterrestrial attacks.
Before panicking, we must first put these words back into context to get a clearer picture, especially when they come from extraordinary figures whom I personally admire.
When Stephen Hawking announced that the end of humankind would come "soon", his declaration had to be read to the end to understand that scientists, particularly those who study the universe, have a very different relationship with time than those who quote them. Hawking's words were, more precisely, "I don't think we will survive another 1000 years without escaping beyond our fragile planet", and he later stated that "we must continue to go into space for the future of humanity", to find new planets to colonise. Put like that, it no longer sounds like the anti-scientific plea that some would like us to believe; on the contrary, his opinion is that science is the only solution for the survival of humanity.
Now let's go back to Elon Musk's words, in which he states that AI (artificial intelligence, although he is really referring to the emergence of artificial consciousness rather than to AI as such, which has been mastered for years and is already used in the Tesla cars that Musk himself manufactures) represents a major risk to our civilisation. Of course, Elon Musk is right! Artificial intelligence does represent a major risk to our civilisation, as does any other technology likely to trigger a technological singularity! (NB: a technological singularity is a change great enough to set off runaway technological growth, causing unpredictable and profound changes to human civilisation.)
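To make that "runaway growth" idea more concrete, here is a toy numerical sketch of my own (not part of the original argument): ordinary exponential growth stays finite at any given date, whereas hyperbolic growth, where the growth rate itself rises with the level already reached, shoots toward infinity in finite time; that finite-time blow-up is the usual mathematical picture behind the word "singularity".

[code]
# Toy comparison between exponential and hyperbolic ("runaway") growth.
# Units and rates are arbitrary; only the shape of the curves matters.

def simulate(growth, x0=1.0, dt=0.01, t_max=10.0, cutoff=1e9):
    """Integrate dx/dt = growth(x) with a simple Euler step."""
    t, x = 0.0, x0
    while t < t_max and x < cutoff:
        x += growth(x) * dt
        t += dt
    return t, x

t_exp, x_exp = simulate(lambda x: x)        # dx/dt = x   -> plain exponential
t_hyp, x_hyp = simulate(lambda x: x * x)    # dx/dt = x^2 -> hyperbolic

print(f"exponential: stopped at t={t_exp:.2f} with x={x_exp:.3g}")
print(f"hyperbolic : stopped at t={t_hyp:.2f} with x={x_hyp:.3g}")
# The exponential run quietly reaches t = 10 with a large but finite value,
# while the hyperbolic run blasts past the 1e9 cutoff shortly after t = 1:
# that finite-time blow-up is the toy picture of a "singularity".
[/code]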
Several scenarios could lead to such a singularity: the mastery of quantum technology, the creation of the first nano-constructor, the uploading of the human mind to a computer, the attainment of immortality, the emergence of artificial consciousness, or an encounter with an advanced extraterrestrial civilisation.
Each of these changes represents a major risk to civilisation as we know it; that qualifier, "as we know it", is the key distinction we must bear in mind.
Indeed, don't forget that our civilisation has already survived similar upheavals in the past! Ask yourself the following questions: do you miss our prehistoric past or the Middle Ages, when living conditions were so harsh that life expectancy was half what it is today? Do you regret the discovery of electricity, printing, modern medicine, the internal combustion engine or the internet? Do you think our civilisation is so perfect today that it should be protected at all costs against the changes and developments that progress could bring?
Well, my answer is no! Change and evolution are not things to be feared: they are things to be wanted! Throughout history, humanity has given its best and outdone itself in periods of major change, when facing the gravest of challenges. Don't forget that we already use many technologies that, in their time, raised the same concerns as artificial intelligence or nanotechnologies, and sometimes worse. Did you know that before the first nuclear bomb was detonated, some scientists worried that the ensuing chain reaction might spread to the atmosphere and reduce our world to ashes?
Of course, the idea here is not to champion the nuclear bomb. However, technologies such as nuclear medicine are now used massively, particularly in medical imaging and radiotherapy. Could they have existed if a world committee had been tasked with regulating research on radioactivity because of the risks (radioactivity being dangerous by nature, and not only inside a bomb)? How many deadly poisons have been used to develop new treatments?
Never forget that, by definition, regulations only bind those who submit to them: do we really want to see far less democratic countries gain a considerable technological lead while we sink into economic and technological underdevelopment, for which we will then have to pay? How many people will die in our hospitals and hospices because clinical trials are too strictly regulated? Currently, 7 to 10 years of R&D are needed before a new medicine reaches the market: don't these regulations slow down medical progress more than they protect patients?
Although I believe that precautions should be taken in some fields, such as vaccination and the treatment of illnesses that are not lethal in the short term, in order to study the medium- and long-term risks, why let thousands of people declared "terminal", with only a few months to live, die when experimental treatments exist that could save some of them?
Some will reply: "what about the long-term consequences of the treatment on some patients?" BUT WE DON'T CARE! Patients in the terminal stage are dying: it is a matter of days or months. Let them have the choice to take risks to save their lives, rather than condemning them by default!
Back to the main subject and to the self-righteous, anti-tech brigade that inspired this post: the most moderate among them are now campaigning for the creation of committees and other oversight bodies. Why not, but let's not forget what history teaches: in the event of a major incident, such organisations are quickly blamed and exposed to significant risk. That is why, after a few months of work, safety (their own) becomes the only option they can choose, permanently, in the same way that we are now bombarded with weather warnings because, one day, no warning was issued when the risk was judged too low and the storm hit anyway: when it comes to precaution, the media spares no one, least of all scientists!
Finally, do not forget that, contrary to what we often hear from anti-progress militants in the media, technology has never been a tool for the powerful to dominate the weak: history, our history, shows that on the contrary, technological advances have always heralded, accompanied or supported freedom.
Progress must not be stopped under false ethical pretexts: that is the real risk to our civilisation. In essence, whatever stops progressing usually ends up grinding to a complete halt.
[hr_caption]Conclusion[/hr_caption]
Today, each one of us must take responsibility. Even if you believe "it was better before", it is too late now to go back: we use too many resources for our small planet, and the only thing that can save our species is science! Science enables us to fight pollution, deforestation and disease effectively; science can save humankind from the extinction that almost every species is programmed for from the moment it first appears.
Unlike the pessimists, I remain convinced that modern technologies such as genetic engineering, information technology and pharmaceuticals, along with anticipated future capabilities including nanotechnologies, artificial intelligence, the uploading of the mind into a computer (and the reverse), and the colonisation of space, herald a brilliant future for our species.
To anticipate the future, it is not enough to extrapolate from current data: it is crucial to take potential spectacular technological progress into account. This is why, over the years, the predicted date at which we will run out of oil has kept being pushed back, as the technologies that enable us to use less of it have improved.
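As a back-of-the-envelope illustration of that point, here is a small sketch with entirely hypothetical figures (the reserve, consumption and efficiency numbers below are mine, chosen only to show the shape of the effect, not real oil statistics): a static projection runs out much sooner than one that lets consumption fall as efficiency improves.

[code]
# Illustrative only: every figure below is hypothetical, chosen to show why
# depletion dates shift, not to estimate real oil reserves.

reserves = 1_000.0       # hypothetical remaining reserves (arbitrary units)
consumption = 20.0       # hypothetical consumption per year (same units)
efficiency_gain = 0.01   # hypothetical 1% drop in consumption each year

# Naive projection: consumption never changes.
naive_years = reserves / consumption

# Projection that accounts for progress: consumption shrinks a little each year.
years, remaining, use = 0, reserves, consumption
while remaining > 0:
    remaining -= use
    use *= (1 - efficiency_gain)
    years += 1

print(f"Static projection:             ~{naive_years:.0f} years of reserves")
print(f"With 1%/year efficiency gains: ~{years} years of reserves")
# The second, later date shows how ignoring technological progress biases
# projections toward premature "end of the resource" announcements.
[/code]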
It would indeed be catastrophic if the potential benefits of the future failed to materialise because of anti-tech campaigners or useless prohibitions. Let's not forget that if, in the past, humanity had stopped in the face of these kinds of fears, inventions such as language, writing, printing, electricity, industrialisation, modern medicine and the internet would never have existed.
The world is such that the voice of protest is often the loudest, when in reality, it is rarely the voice of the majority. By definition, the silent majority keeps quiet. Perhaps it is time for change and for another view to be heard: a voice for the future; a voice to defend another way, in which humanity can transcend itself and give its best... if it stops being afraid and commits to it!
Christophe Casalegno
You can follow me on: Twitter | Facebook | Linkedin | Telegram