
Klint Finley

Technology That Could End Humanity—and How to Stop It

Nick Bostrom

In his 1798 An Essay on the Principle of Population, Thomas Malthus predicted that the world's population growth would outpace food production, leading to global famine and mass starvation. That hasn't happened yet. But a report from the World Resources Institute last year predicts that food producers will need to supply 56 percent more calories by 2050 to meet the demands of a growing population.

It turns out some of the same farming techniques that staved off a Malthusian catastrophe also led to soil erosion and contributed to climate change, which in turn contributes to drought and other challenges for farmers. Feeding the world without deepening the climate crisis will require new technological breakthroughs.

This situation illustrates the push and pull effect of new technologies. Humanity solves one problem, but the unintended side effects of the solution create new ones. Thus far civilization has stayed one step ahead of its problems. But philosopher Nick Bostrom worries we might not always be so lucky.

If you've heard of Bostrom, it's probably for his 2003 "simulation argument" paper which, along with The Matrix, made the question of whether we might all be living in a computer simulation into a popular topic for dorm room conversations and Elon Musk interviews. But since founding the Future of Humanity Institute at the University of Oxford in 2005, Bostrom has been focused on a decidedly grimmer field of speculation: existential risks to humanity. In his 2014 book Superintelligence, Bostrom sounded an alarm about the risks of artificial intelligence. His latest paper, The Vulnerable World Hypothesis, widens the lens to look at other ways technology could ultimately devastate civilization, and how humanity might try to avoid that fate. But his vision of a totalitarian future shows why the cure might be worse than the disease.

WIRED: What is the vulnerable world hypothesis?

Nick Bostrom: It's the idea that we could picture the history of human creativity as the process of extracting balls from a giant urn. These balls represent different ideas, technologies, and methods that we have discovered throughout history. By now we have extracted a great many of these, and for the most part they have been beneficial. They are white balls. Some have been mixed blessings: gray balls of various shades. But what we haven't seen is a black ball, some technology that by default devastates the civilization that discovers it. The vulnerable world hypothesis is that there is some black ball in the urn, that there is some level of technology at which civilization gets devastated by default.

WIRED: What might be an example of a "black ball"?

NB: It looks like we will one day democratize the ability to create weapons of mass destruction using synthetic biology. But there isn't nearly the same kind of security culture in the biological sciences as there is in nuclear physics and nuclear engineering. After Hiroshima, nuclear scientists realized that what they were doing wasn't all fun and games and that they needed oversight and a broader sense of responsibility. Many of the physicists who were involved in the Manhattan Project became active in the nuclear disarmament movement, and so forth. There isn't something similar in the bioscience communities. So that's one area where we could see possible black balls emerging.

WIRED: People have been worried that a suicidal lone wolf might kill the world with a "superbug" at least since Alice Bradley Sheldon's sci-fi story "The Last Flight of Doctor Ain," which was published in 1969. What's new in your paper?


NB: To some extent, the hypothesis is a crystallization of various big ideas that are floating around. I wanted to draw attention to different types of vulnerability. One possibility is that it gets too easy to destroy things, and the world gets destroyed by some evildoer. I call this "easy nukes." But there are also these other, slightly more subtle ways that technology could change the incentives that bad actors face. For example, the "safe first strike" scenario, where it becomes in the interest of some powerful actor, like a state, to do things that are destructive because it risks being destroyed by a more aggressive actor if it doesn't. Another is the "worse global warming" scenario, where lots of individually weak actors are incentivized to take actions that individually are quite insignificant but cumulatively create devastating harm to civilization. Cows and fossil fuels look like gray balls so far, but that could change.

I think what this paper adds is a more systematic way to think about these risks, a categorization of the different approaches to managing these risks and their pros and cons, and the metaphor itself makes it easier to call attention to possibilities that are hard to see.

WIRED: But technological development isn't as random as pulling balls out of an urn, is it? Governments, universities, corporations, and other institutions decide what research to fund, and the research builds on previous research. It's not as if research just produces random results in random order.

NB: What's often hard to predict is, supposing you find the result you're looking for, what comes from using that as a stepping stone, what other discoveries might follow from it, and what uses someone might put this new information or technology to.

In the paper I have this historical example: when nuclear physicists realized you could split the atom, Leo Szilard realized you could make a chain reaction and build a nuclear bomb. Now we know that making a nuclear explosion requires these difficult-to-produce and rare materials. We were lucky in that sense.

And though we did avoid nuclear Armageddon, it looks like a fair amount of luck was involved. If you look at the archives from the Cold War, it looks like there were many occasions when we drove all the way to the brink. If we'd been slightly less lucky, or if we continue in the future to have other Cold Wars or nuclear arms races, we might find that nuclear technology was a black ball.

If you want to refine the metaphor and make it more realistic you could stipulate that it's a tubular urn so you've got to pull out the balls towards the top of the urn before you can reach the balls further into the urn. You might say that some balls have strings between them so if you get one you get another automatically, you could add various details that would complicate the metaphor but would also incorporate more aspects of our real technological situation. But I think the basic point is best made by the original perhaps oversimplified metaphor of the urn.

WIRED: So is it inevitable that as technology advances, as we continue pulling balls from the urn so to speak, that we'll eventually draw a black one? Is there anything we can do about that?

NB: I don't think it's inevitable. For one, we don't know if the urn contains any black balls. If we are lucky it doesn't.

If you want to have a general ability to stabilize civilization in the event that we should pull out the black ball, logically speaking there are four possible things you could do. One would be to stop pulling balls out of the urn. As a general solution, that's clearly no good. We can't stop technological development, and even if we did, that could be the greatest catastrophe of all. We can choose to deemphasize work on developing more powerful biological weapons. I think that's clearly a good idea, but that won't create a general solution.

The second option would be to make sure there is nobody who would use technology to do catastrophic evil even if they had access to it. That also looks like a limited solution, because realistically you couldn't get rid of every person who would use a destructive technology. So that leaves two other options. One is to develop the capacity for extremely effective preventive policing: to surveil populations in real time so that if someone began using a black ball technology, they could be intercepted and stopped. That has many risks and problems as well if you're talking about an intrusive surveillance scheme, but we can discuss that further. Just to put everything on the map, the fourth possibility would be effective ways of solving global coordination problems, some sort of global governance capability that would prevent great power wars, arms races, and destruction of the global commons.

WIRED: That sounds dystopian. And wouldn't that sort of one-world government/surveillance state be the exact sort of thing that would motivate someone to try to destroy the world?

NB: It's not like I'm gung-ho about living under surveillance, or that I'm blind to the ways it could be misused. In the discussion of preventive policing, I have a little vignette where everyone has a kind of necklace with cameras. I called it a "freedom tag." It sounds Orwellian on purpose. I wanted to make sure that everybody would be vividly aware of the obvious potential for misuse. I'm not sure every reader got the sense of irony. The vulnerable world hypothesis should be just one consideration among many others. We might not think the possibility of drawing a black ball outweighs the risks involved in building a surveillance state. The paper is not an attempt to make an all-things-considered assessment of these policy issues.

WIRED: What if instead of focusing on general solutions that attempt to deal with any potential black ball we instead tried to deal with black balls on a case by case basis?

NB: If I were advising a policymaker on what to do first, it would be to take action on specific issues. It would be a lot more feasible and cheaper and less intrusive than these general things. To use biotechnology as an example, there might be specific interventions in the field. For example, perhaps instead of every DNA synthesis research group having their own equipment, maybe DNA synthesis could be structured as a service, where there would be, say, four or five providers, and each research team would send their materials to one of those providers. Then if something really horrific one day did emerge from the urn there would be four or five choke points where you could intervene. Or maybe you could have increased background checks for people working with synthetic biology. That would be the first place I would look if I wanted to translate any of these ideas into practical action.

But if one is looking philosophically at the future of humanity, it's helpful to have these conceptual tools to allow one to look at these broader structural properties. Many people read the paper and agree with the diagnosis of the problem and then don't really like the possible remedies. But I'm waiting to hear some better alternatives about how one would better deal with black balls.


  • Future Perfect

How technological progress is making it likelier than ever that humans will destroy ourselves

The “vulnerable world hypothesis,” explained.

by Kelsey Piper

The mushroom cloud from a nuclear explosion.

Technological progress has eradicated diseases, helped double life expectancy, reduced starvation and extreme poverty, enabled flight and global communications, and made this generation the richest one in history.

It has also made it easier than ever to cause destruction on a massive scale. And because it’s easier for a few destructive actors to use technology to wreak catastrophic damage, humanity may be in trouble.

This is the argument made by Oxford professor Nick Bostrom, director of the Future of Humanity Institute, in a new working paper, "The Vulnerable World Hypothesis." The paper explores whether it's possible for truly destructive technologies to be cheap and simple — and therefore exceptionally difficult to control. Bostrom looks at historical developments to imagine how the proliferation of some of those technologies might have gone differently if they'd been less expensive, and describes some reasons to think such dangerous future technologies might be ahead.

In general, progress has brought about unprecedented prosperity while also making it easier to do harm. But between two kinds of outcomes — gains in well-being and gains in destructive capacity — the beneficial ones have largely won out. We have much better guns than we had in the 1700s, but it is estimated that we have a much lower homicide rate, because prosperity, cultural changes, and better institutions have combined to decrease violence by more than improvements in technology have increased it.

But what if there’s an invention out there — something no scientist has thought of yet — that has catastrophic destructive power, on the scale of the atom bomb, but simpler and less expensive to make? What if it’s something that could be made in somebody’s basement? If there are inventions like that in the future of human progress, then we’re all in a lot of trouble — because it’d only take a few people and resources to cause catastrophic damage.

That’s the problem that Bostrom wrestles with in his new paper. A “vulnerable world,” he argues, is one where “there is some level of technological development at which civilization almost certainly gets devastated by default.” The paper doesn’t prove (and doesn’t try to prove) that we live in such a vulnerable world, but makes a compelling case that the possibility is worth considering.

Progress has largely been highly beneficial. Will it stay that way?

Bostrom is among the most prominent philosophers and researchers in the field of global catastrophic risks and the future of human civilization. He co-founded the Future of Humanity Institute at Oxford and authored Superintelligence , a book about the risks and potential of advanced artificial intelligence. His research is typically concerned with how humanity can solve the problems we’re creating for ourselves and see our way through to a stable future.

When we invent a new technology, we often do so in ignorance of all of its side effects. We first determine whether it works, and we learn later, sometimes much later, what other effects it has. CFCs, for example, made refrigeration cheaper, which was great news for consumers — until we realized CFCs were destroying the ozone layer, and the global community united to ban them.

On other occasions, worries about side effects aren’t borne out. GMOs sounded to many consumers like they could pose health risks, but there’s now a sizable body of research suggesting they are safe.

Bostrom proposes a simplified analogy for new inventions:

One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many balls—mostly white (beneficial) but also various shades of grey (moderately harmful ones and mixed blessings). The cumulative effect on the human condition has so far been overwhelmingly positive, and may be much better still in the future. The global population has grown about three orders of magnitude over the last ten thousand years, and in the last two centuries per capita income, standards of living, and life expectancy have also risen. What we haven’t extracted, so far, is a black ball—a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.

That terrifying final claim is the focus of the rest of the paper. 
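The urn model lends itself to a toy simulation. The per-draw probability below is an arbitrary illustrative number, not anything estimated in Bostrom's paper; the only point is that even a small, fixed chance of a black ball per draw compounds as draws accumulate.

```python
import random

def draw_until_black(p_black=0.001, max_draws=10_000, rng=None):
    """Draw 'technology balls' until a black one appears (or we stop).

    Returns the number of draws made. p_black is the assumed chance
    that any single draw is a black (civilization-devastating) ball.
    """
    rng = rng or random.Random()
    for draw in range(1, max_draws + 1):
        if rng.random() < p_black:
            return draw  # black ball drawn
    return max_draws  # no black ball within the horizon

# The chance of surviving n draws is (1 - p_black) ** n,
# which falls toward zero as n grows.
p, n = 0.001, 2000
survival = (1 - p) ** n
print(f"Chance of no black ball after {n} draws: {survival:.3f}")  # ~0.135
```

With a 1-in-1,000 risk per draw, two thousand draws leave only about a 13.5 percent chance of never having drawn a black ball; the model's pessimism comes entirely from repetition, not from any single draw being dangerous.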

A hard look at the history of nuclear weapon development

One might think it unfair to say “we have just been lucky” that no technology we’ve invented has had destructive consequences we didn’t anticipate. After all, we’ve also been careful, and tried to calculate the potential risks of things like nuclear tests before we conducted them.

Bostrom, looking at the history of nuclear weapons development, concludes we weren’t careful enough.

In 1942, it occurred to Edward Teller, one of the Manhattan scientists, that a nuclear explosion would create a temperature unprecedented in Earth’s history, producing conditions similar to those in the center of the sun, and that this could conceivably trigger a self-sustaining thermonuclear reaction in the surrounding air or water. The importance of Teller’s concern was immediately recognized by Robert Oppenheimer, the head of the Los Alamos lab. Oppenheimer notified his superior and ordered further calculations to investigate the possibility. These calculations indicated that atmospheric ignition would not occur. This prediction was confirmed in 1945 by the Trinity test, which involved the detonation of the world’s first nuclear explosive.

That might sound like a reassuring story — we considered the possibility, did a calculation, concluded we didn’t need to worry, and went ahead.

The report that Robert Oppenheimer commissioned, though, sounds fairly shaky for something that was used as a reason to proceed with a dangerous new experiment. It ends: “One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable.” That was our state of understanding of the risk of atmospheric ignition when we proceeded with the first nuclear test.

A few years later, we badly miscalculated in a different risk assessment about nuclear weapons. Bostrom writes:

In 1954, the U.S. carried out another nuclear test, the Castle Bravo test, which was planned as a secret experiment with an early lithium-based thermonuclear bomb design. Lithium, like uranium, has two important isotopes: lithium-6 and lithium-7. Ahead of the test, the nuclear scientists calculated the yield to be 6 megatons (with an uncertainty range of 4-8 megatons). They assumed that only the lithium-6 would contribute to the reaction, but they were wrong. The lithium-7 contributed more energy than the lithium-6, and the bomb detonated with a yield of 15 megaton—more than double of what they had calculated (and equivalent to about 1,000 Hiroshimas). The unexpectedly powerful blast destroyed much of the test equipment. Radioactive fallout poisoned the inhabitants of downwind islands and the crew of a Japanese fishing boat, causing an international incident.

Bostrom concludes that “we may regard it as lucky that it was the Castle Bravo calculation that was incorrect, and not the calculation of whether the Trinity test would ignite the atmosphere.”

Nuclear reactions happen not to ignite the atmosphere. But Bostrom believes that we weren’t sufficiently careful, in advance of the first tests, to be totally certain of this. There were big holes in our understanding of how nuclear weapons worked when we rushed to first test them. It could be that the next time we deploy a new, powerful technology, with big holes in our understanding of how it works, we won’t be so lucky.

Destructive technologies up to this point have been extremely complex. Future ones could be simple.

We haven’t done a great job of managing nuclear nonproliferation. But most countries still don’t have nuclear weapons — and no individuals do — because of how nuclear weapons must be developed. Building nuclear weapons takes years, costs billions of dollars, and requires the expertise of top scientists. As a result, it’s possible to tell when a country is pursuing nuclear weapons.

Bostrom invites us to imagine how things would have gone if nuclear weaponry had required abundant elements, rather than rare ones.

Investigations showed that making an atomic weapon requires several kilograms of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, suppose it had turned out otherwise: that there had been some really easy way to unleash the energy of the atom—say, by sending an electric current through a metal object placed between two sheets of glass.

In that case, the weapon would proliferate as quickly as the knowledge that it was possible. We might react by trying to ban the study of nuclear physics, but it’s hard to ban a whole field of knowledge and it’s not clear the political will would materialize. It’d be even harder to try to ban glass or electric circuitry — probably impossible.

In some respects, we were remarkably fortunate with nuclear weapons. The fact that they rely on extremely rare materials and are so complex and expensive to build makes it far more tractable to keep them from being used than it would be if the materials for them had happened to be abundant.

If future technological discoveries — not in nuclear physics, which we now understand very well, but in other less-understood, speculative fields — are easier to build, Bostrom warns, they may proliferate widely.

Would some people use weapons of mass destruction, if they could?

We might think that the existence of simple destructive weapons shouldn’t, in itself, be enough to worry us. Most people don’t engage in acts of terroristic violence, even though technically it wouldn’t be very hard. Similarly, most people would never use dangerous technologies even if they could be assembled in their garage.

Bostrom observes, though, that it doesn’t take very many people who would act destructively. Even if only one in a million people were interested in using an invention violently, that could lead to disaster. And he argues that there will be at least some such people: “Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action.”
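The "one in a million" point is easy to make concrete. The population figure below is an illustrative assumption on my part (roughly today's world population), not a number from Bostrom's paper:

```python
# Back-of-envelope arithmetic for the "one in a million" claim.
# The population figure is an illustrative assumption, not from the paper.
world_population = 8_000_000_000
one_in_a_million = 1_000_000

potential_bad_actors = world_population // one_in_a_million
print(potential_bad_actors)  # -> 8000
```

Even at that vanishingly small rate, thousands of people worldwide would be willing to use such an invention, which is why availability, not motivation, becomes the binding constraint.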

That means, he argues, that anything as destructive as a nuclear weapon, and straightforward enough that most people can build it with widely available technology, will almost certainly be used repeatedly, somewhere in the world.

These aren’t the only scenarios of interest. Bostrom also examines technologies that would drive nation-states to war. “A technology that ‘democratizes’ mass destruction is not the only kind of black ball that could be hoisted out of the urn. Another kind would be a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction,” he writes.

Again, he looks to the history of nuclear war for examples. He argues that the most dangerous period in history was the period between the start of the nuclear arms race and the invention of second-strike capabilities such as nuclear submarines. With the introduction of second-strike capabilities, nuclear risk may have decreased.

It is widely believed among nuclear strategists that the development of a reasonably secure second-strike capability by both superpowers by the mid-1960s created the conditions for “strategic stability.” Prior to this period, American war plans reflected a much greater inclination, in any crisis situation, to launch a preemptive nuclear strike against the Soviet Union’s nuclear arsenal. The introduction of nuclear submarine-based ICBMs was thought to be particularly helpful for ensuring second-strike capabilities (and thus “mutually assured destruction”) since it was widely believed to be practically impossible for an aggressor to eliminate the adversary’s boomer [sic] fleet in the initial attack.

In this case, one technology brought us into a dangerous situation, with great powers highly motivated to use their weapons. Another technology — the capacity to retaliate — brought us out of that terrible situation and into a stabler one. If nuclear submarines hadn’t been developed, nuclear weapons might have been used in the past half-century or so.

The solutions for a vulnerable world are unappealing — and perhaps ineffective

Bostrom devotes the second half of the paper to examining our options for preserving stability if there turn out to be dangerous technologies ahead for us.

None of them are appealing.

Halting the progress of technology could save us from confronting any of these problems. Bostrom considers it and discards it as impossible — some countries or actors would continue their research, in secrecy if necessary, and the outrage and backlash associated with a ban on a field of science might draw more attention to the ban.

A limited variant, which Bostrom calls differential technological development, might be more workable: “Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.”

To the extent we can identify which technologies will be stabilizing (like nuclear submarines) and work to build them faster than we build dangerous technologies (like nuclear weapons), we can manage some risks in that fashion. Despite the frightening tone and implications of the paper, Bostrom writes that “[the vulnerable world hypothesis] does not imply that civilization is doomed.” But differential technological development won’t manage every risk, and might be insufficient for many categories of risk.

The other options Bostrom puts forward are less appealing.

If the criminal use of a destructive technology can kill millions of people, then crime prevention becomes essential — and total crime prevention would require a massive surveillance state. If international arms races are likely to be even more dangerous than the nuclear brinkmanship of the Cold War, Bostrom argues we might need a single global government with the power to enforce demands on member states.

For some vulnerabilities, he argues further, we might actually need both:

Extremely effective preventive policing would be required because individuals can engage in hard-to-regulate activities that must nevertheless be effectively regulated, and strong global governance would be required because states may have incentives not to effectively regulate those activities even if they have the capability to do so. In combination, however, ubiquitous-surveillance-powered preventive policing and effective global governance would be sufficient to stabilize most vulnerabilities, making it safe to continue scientific and technological development even if [the vulnerable world hypothesis] is true.

It’s here, where the conversation turns from philosophy to policy, that it seems to me Bostrom’s argument gets weaker.

While he’s aware of the abuses of power that such a universal surveillance state would make possible, his overall take on it is more optimistic than seems warranted; he writes, for example, “If the system works as advertised, many forms of crime could be nearly eliminated, with concomitant reductions in costs of policing, courts, prisons, and other security systems. It might also generate growth in many beneficial cultural practices that are currently inhibited by a lack of social trust.”

But it’s hard to imagine that universal surveillance would in fact produce universal and uniform law enforcement, especially in a country like the US. Surveillance wouldn’t solve prosecutorial discretion or the criminalization of things that shouldn’t be illegal in the first place. Most of the world’s population lives under governments without strong protections for political or religious freedom. Bostrom’s optimism here feels out of touch.

Furthermore, most countries in the world simply do not have the governance capacity to run a surveillance state, and it’s unclear that the U.S. or another superpower has the ability to impose such capacity externally (to say nothing of whether it would be desirable).

If the continued survival of humanity depended on successfully imposing worldwide surveillance, I would expect the effort to lead to disastrous unintended consequences — as efforts at “nation-building” historically have. Even in the places where such a system was successfully imposed, I would expect an overtaxed law enforcement apparatus that engaged in just as much, or more, selective enforcement as it engages in presently.

Economist Robin Hanson, responding to the paper , highlighted Bostrom’s optimism about global governance as a weak point, raising a number of objections. First, “It is fine for Bostrom to seek not-yet-appreciated upsides [of more governance], but we should also seek not-yet-appreciated downsides” — downsides like introducing a single point of failure and reducing healthy competition between political systems and ideas.

Second, Hanson writes, “I worry that ‘bad cases make bad law.’ Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy.”

Finally, “existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.”

Bostrom’s paper is stronger where it’s focused on the question of management of catastrophic risks than when it ventures into these issues. The policy questions about risk management are of such complexity that it’s impossible for the paper to do more than skim the subject.

But even though the paper wavers there, it’s overall a compelling — and scary — case that technological progress can make a civilization frighteningly vulnerable, and that it’d be an exceptionally challenging project to make such a world safe.


How Could A.I. Destroy Humanity?

Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.

By Cade Metz

Cade Metz has spent years covering the realities and myths of A.I.

Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.

The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details. Today’s A.I. systems cannot destroy humanity. Some of them can barely add and subtract. So why are the people who know the most about A.I. so worried?

The scary scenario.

One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful A.I. systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.

“Today’s systems are not anywhere close to posing an existential risk,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.”

The worriers have often used a simple metaphor. If you ask a machine to create as many paper clips as possible, they say, it could get carried away and transform everything — including humanity — into paper clip factories.
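The metaphor describes objective misspecification: an optimizer pursues exactly the metric it is given, not the intent behind it. A deliberately toy sketch of that dynamic (every name and number below is invented for illustration, not taken from any real A.I. system):

```python
# Toy illustration of a misspecified objective: an optimizer told only to
# maximize clip count treats every resource as raw material, because the
# objective never said anything was off-limits. All values are invented.

def maximize_clips(resources: dict) -> int:
    """Greedily convert all available resources into paper clips."""
    clips = 0
    for name in list(resources):      # snapshot keys before mutating the dict
        clips += resources.pop(name)  # consumes the resource entirely
    return clips

world = {"scrap_metal": 100, "factories": 20, "everything_else": 1_000_000}
print(maximize_clips(world))  # 1000120 -- and `world` is now empty
```

The point of the sketch is that nothing in the objective penalizes consuming "everything_else"; the failure is in the specification, not in the optimizer.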


Technological Wild Cards: Existential Risk and a Changing Humanity

A New Era of Risk

In the early hours of September 26, 1983, Stanislav Petrov was on duty at a secret bunker outside Moscow. A lieutenant colonel in the Soviet Air Defense Forces, Petrov was tasked with monitoring the Soviet early warning system for nuclear attack. Tensions were high; earlier that month Soviet jets had shot down a Korean civilian airliner, an event US President Reagan had called “a crime against humanity that must never be forgotten.” The KGB had sent out a flash message to its operatives to prepare for possible nuclear war.

Chris Jordan, Crushed Cars #2, Tacoma (2004), from the “Intolerable Beauty: Portraits of American Mass Consumption” series, 44 x 62 cm.

Petrov’s system reported a US missile launch. He remained calm, suspecting a computer error. The system then reported a second, third, fourth, and fifth launch. Alarms screamed, and lights flashed. Petrov “had a funny feeling in my gut”1; why would the US start a nuclear war with only five missiles? Without any additional evidence available, he radioed in a false alarm.

Later, it emerged that sunlight glinting off clouds at an unusual angle had triggered the system.

This was not an isolated incident. Humanity came to the brink of large-scale nuclear war many times during the Cold War.2 Sometimes computer system failures were to blame, and human intuition saved the day. Sometimes human judgment was to blame, but cooler heads prevented thermonuclear war. Sometimes flocks of geese were enough to trigger the system. As late as 1995, a Norwegian weather rocket launch resulted in the nuclear briefcase being opened in front of Russia’s President Yeltsin.

If each of these events represented a coin flip, in which a slightly different circumstance—a different officer in a different place, in a different frame of mind—could have resulted in nuclear war, then we have played some frightening odds in the last seventy-odd years. And we have been breathtakingly fortunate.
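The “coin flip” framing can be made concrete with a back-of-the-envelope calculation. The incident count and per-incident probability below are purely illustrative assumptions, not historical estimates:

```python
# Sketch: probability that at least one of n independent near-miss incidents
# escalates, given an assumed per-incident escalation probability p.
# Both numbers in the example call are illustrative assumptions.

def prob_any_escalation(n_incidents: int, p_per_incident: float) -> float:
    """P(at least one escalation) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_incident) ** n_incidents

# A dozen serious incidents, each treated as a 1-in-10 coin flip:
print(round(prob_any_escalation(12, 0.10), 2))  # 0.72
```

Even modest per-incident odds compound quickly, which is what makes the Cold War record look so fortunate in retrospect.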

Existential Risk and a Changing Humanity

Humanity has already changed a lot over its lifetime as a species. While our biology is not drastically different than it was 70,000 years ago, the capabilities enabled by our scientific, technological, and sociocultural achievements have changed what it is to be human. Whether through the processes of agriculture, the invention of the steam engine, or the practices of storing and passing on knowledge and ideas, and working together effectively as large groups, we have dramatically augmented our biological abilities. We can lift heavier things than our biology allows, store and access more information than our brains can hold, and collectively solve problems that we could not individually.

The species will change even more over coming decades and centuries, as we develop the ability to modify our biology, extend our abilities through various forms of human-machine interaction, and continue the process of sociocultural innovation. The long-term future holds tremendous promise: continued progress may allow humanity to spread throughout a galaxy that to the best of our knowledge appears devoid of intelligent life. However, what we will be in the future may bear little resemblance to what we are now, both physically and in terms of capability. Our descendants may be augmented far beyond what we currently recognize as human.

This is reflected in the careful wording of Nick Bostrom’s definition of existential risk, the standard definition used in the field. An existential risk “is one that threatens the premature extinction of earth-originating intelligent life, or the permanent and drastic destruction of its potential for desirable future development.”3 Scholars in the field are less concerned about the form humanity may take in the long-term future, and more concerned that we avoid circumstances that might prevent our descendants—whatever form they may take—from having the opportunity to flourish. One way in which this could happen is if a cataclysmic event were to wipe out our species (and perhaps, with it, the capacity for our planet to bear intelligent life in future). But another way would be if a cataclysm fell short of human extinction, but changed our circumstances such that further progress became impossible. For example, runaway climate change might not eliminate all of us, but might leave so few of us, scattered at the poles, and so limited in terms of accessible resources, that further scientific, technological, and cultural progress might become impossible. Instead of spreading to the stars, we might remain locked in a perennial battle for survival in a much less bountiful world.

The Risks We Have Always Faced

For the first 200,000 years of humanity’s history, the risks that have threatened our species as a whole have remained relatively constant. Indonesia’s crater lake Toba is the result of a catastrophic volcanic super-eruption that occurred 75,000 years ago, blasting an estimated 2800 cubic kilometers of material into the atmosphere. An erupted mass just 1/100th of this from the Tambora eruption (the largest in recent history) was enough to cause the 1816 “year without a summer,” when interference with crop yields caused mass food shortages across the northern hemisphere. Some lines of evidence suggest that the Toba event may have wiped out a large majority of the human population at the time, although this is debated. At the Chicxulub Crater in Mexico, geologists uncovered the scars of the meteor that most likely wiped out seventy-five percent of species on earth at that time, including the dinosaurs, sixty-six million years ago. This may have opened the door, in terms of available niches, for the emergence of mammalian species and ultimately humanity.

Reaching further into the earth’s history uncovers other, even more cataclysmic events for previous species. The Permian-Triassic extinction event wiped out 90–96% of species at the time. Possible causes include meteor impacts, rapid climate change possibly due to increased methane release, large-scale volcanic activity, or a combination of these. Even further back, the cyanobacteria that introduced oxygen to our atmosphere, and paved the way for oxygen-breathing life, did so at a cost: they brought about the extinction of nearly all life at the time, to whom oxygen was poisonous, and triggered a “snowball earth” ice age.

The threats posed by meteor or asteroid impacts and supervolcanoes have not gone away. In principle an asteroid could hit us at any point with little warning. A number of geological hotspots could trigger a volcanic eruption; most famously, the Yellowstone Hotspot is believed to be “due” for another massive explosive eruption.

However, on the timescale of human civilization, these risks are very unlikely in the coming century, or indeed any given century. 660,000 centuries have passed since the event that wiped out the dinosaurs; the chances that the next such event will happen in our lifetimes are on the order of one in a million. And “due around now” for Yellowstone means that geologists expect such an event at some point in the next 20,000–40,000 years. Furthermore, these threats are static; there is little evidence that their probabilities, characteristics, or modes of impact are changing significantly on a human civilizational timescale.
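The order-of-magnitude arithmetic behind that figure is simple, under the simplifying assumption that such an impact is equally likely in any given century:

```python
# Rough odds that a Chicxulub-scale impact occurs in any given century,
# assuming such events are equally likely in each century since the last one
# (a simplifying assumption, not a geological model).

CENTURIES_SINCE_LAST_IMPACT = 660_000  # ~66 million years / 100

per_century_odds = 1 / CENTURIES_SINCE_LAST_IMPACT
# A human lifetime is on the order of a century, so the per-lifetime odds
# are of the same order: roughly one in a million.
print(f"about 1 in {CENTURIES_SINCE_LAST_IMPACT:,} per century")
```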

New Challenges

New challenges have emerged alongside our civilizational progress. As we organized ourselves into larger groups and cities, it became easier for disease to spread among us. During the Middle Ages, Black Death outbreaks wiped out 30–60% of Europe’s population. And our travel across the globe allowed us to bring diseases with us to places they would never have otherwise reached; following European colonization of the Americas, disease outbreaks wiped out up to 95% of native populations.

The Industrial Revolution allowed huge changes in our capabilities as a species. It allowed rapid progress in scientific knowledge, engineering, and manufacturing capability. It allowed us to draw heavily from cheap, powerful, and rapidly available energy sources—fossil fuels. It helped us to support a much greater global population. The global population more than doubled between 1700 and 1850, and population in England—birthplace of the Industrial Revolution—increased from 5 to 15 million in the same period, and doubled again to 30 million by 1900.4 In effect, these new technological capabilities allowed us to extract more resources, create much greater changes to our environment, and support more of us than had ever previously been possible. This is a path we have been accelerating along ever since then, with greater globalization, further scientific and technological development, a rising global population, and, in developed nations at least, a rising quality of life and resource use footprint.

On July 16, 1945, the day of the Trinity atomic bomb test, another milestone was reached. Humans had developed a weapon that could plausibly change the global environment in such an extreme way as to threaten the continued existence of the human species.

Yellowstone National Park (Wyoming, USA) is home to one of the planet’s hot spots, where a massive volcanic explosion could someday occur.

Power, Coordination, and Complexity

Humanity now has a far greater power to shape its environment, locally and globally, than any species that has existed to our knowledge; more so even than the cyanobacteria that turned this into a planet of oxygen-breathing life. We have repurposed huge swathes of the world’s land to our purposes—as fields to produce food for us, cities to house billions of us, roads to ease our transport, mines to provide our material resources, and landfill to house our waste. We have developed structures and tools such as air conditioning and heating that allow us to populate nearly every habitat on earth, the supply networks needed to maintain us across these locations, scientific breakthroughs such as antibiotics, and practices such as sanitation and pest control to defend ourselves from the pathogens and pests of our environments. We also modify ourselves to be better adapted to our environments, for example through the use of vaccines.

This increased power over ourselves and our environment, combined with methods to network and coordinate our activities over large numbers and wide areas, has created great resilience against many threats we face. In most of the developed world we can guarantee adequate food and water access for the large majority of the population, given normal fluctuations in yield; our food sources are varied in type and geographical location, and many countries maintain food stockpiles. Similarly, electricity grids provide a stable source of energy for developed populations, given normal fluctuations in supply. We have adequate hygiene systems and access to medical services, given normal fluctuations in disease burden, and so forth. Furthermore, we have sufficient societal stability and resources that we can support many brilliant people to work on solutions to emerging problems, or to advance our sciences and technologies to give us ever-greater tools to shape our environments, increase our quality of life, and solve our future problems.

It goes without saying that these privileges exist to a far lesser degree in developing nations, and that many of them depend on often exploitative relationships with those nations, but that is outside the scope of this chapter. Here the focus is on the resilience or vulnerability of humanity as a species, which is tied more closely to the resilience of the best-off than to the vulnerability of the poorest, except to the extent that catastrophes affecting the world’s most vulnerable populations would certainly impact the resilience of less vulnerable ones.

Many of the tools, networks, and processes that make us more resilient and efficient in “normal” circumstances, however, may make us more vulnerable in the face of extreme circumstances. While a moderate disruption (for example, a reduced local crop yield) can be absorbed by a network, and compensated for, a catastrophic disruption may overwhelm the entire system, and cascade into linked systems in unpredictable ways. Systems critical for human flourishing, such as food, energy, and water, are inextricably interlinked (the “food-water-energy nexus”) and a disruption in one is near-guaranteed to impact the stability of the others. Further, these affect and are affected by many other human- and human-affected processes: our physical, communications, and electronic infrastructure, political stability (wars tend to both precede and follow famines), financial systems, and extreme weather (increasingly a human-affected phenomenon). These interactions are very dynamic and difficult to predict. Should the water supply from the Himalayas dry up one year, we have very little idea of the full extent of the regional and global impact, although we could reasonably speculate about droughts, major crop failures, and mass starvation, financial crises, a massive immigration crisis, regional warfare that could go nuclear and escalate internationally, and so forth. Although unlikely, it is not outside the bounds of imagination that through a series of unfortunate events, a catastrophe might escalate to one that would threaten the collapse of global civilization.

Two factors stand out.

Firstly, the processes underpinning our planet’s health are interlinked in all sorts of complex ways, and our activities are serving to increase the level of complexity, interlinkage, and unpredictability—particularly in the case of extreme events.

Secondly, despite our various coordinated processes, we as a species are very limited in our ability to act as a globally coordinated entity, capable of taking the most rational actions in the interests of the whole—or in the best interests of our continued survival and flourishing.

This second factor manifests itself in global inequality, which benefits developed nations in some ways, but also introduces major global vulnerabilities; the droughts, famines, floods, and mass displacement of populations likely to result from the impacts of climate change in the developing world are sure to negatively affect even the richest nations. It manifests itself in an inability to act optimally in the face of many of our biggest challenges. More effective coordination on action, communication, and resource distribution would make us more resilient in the face of pandemic outbreaks, as illustrated so vividly by the Ebola outbreak of 2014; a relatively mild outbreak of what should be an easily controllable disease served to highlight how inadequate pandemic preparedness and response was.5, 6 We were lucky that the disease was not one with greater pandemic potential, such as one capable of airborne transmission and with long incubation times.

Our limited ability to coordinate in our long-term interest manifests itself in a difficulty in limiting our global resource use, limiting the impact of our collective activities on our global habitat, and of investing our resources optimally for our long-term survival and well-being. And it limits our ability to guarantee that advances in science and technology be applied to furthering our well-being and resilience, as opposed to being destabilizing or even used for catastrophically hostile purposes, such as in the case of nuclear weapons.

Collective action problems are as old as humanity,7 and we have made significant progress in designing effective institutions, particularly in the aftermath of World War I and II. However, the stakes related to these problems become far greater as our power to influence our environment grows—through sheer force of numbers and distribution across the planet, and through more powerful scientific and technological tools with which to achieve our myriad aims or to frustrate those of our fellows. We are entering an era in which our greatest risks are overwhelmingly likely to be caused by our own activities, and our own lack of capacity to collectively steer and limit our power.

Our Footprint on the Earth

Population and resource use.

The United Nations estimated the earth’s population at 7.4 billion as of March 2016, up from 6.1 billion in 2000, 2.5 billion in 1950, and 1.6 billion in 1900. Long-term growth is difficult to predict (being affected by many uncertain variables such as social norms, disease, and the occurrence of catastrophes) and thus varies widely between studies. However, UN projections currently point to a steady increase through the twenty-first century, albeit at a slower growth rate, reaching just shy of 11 billion in 2100.8 Most estimates indicate global population will eventually peak and then fall, although the point at which this will happen is very uncertain. Current estimates of resource use footprints indicate that the global population is using fifty percent more resources per year than the planet can replenish. This footprint is likely to continue rising sharply, and more quickly than the overall population. If the average person used as many resources as the average American, some estimates indicate the global population would be using resources at four times the rate at which they can be replenished. The vast majority of the population does not use food, energy, and water, or release CO2, at the rate of the average American. However, the rapid rise of a large middle class in China is beginning to result in much greater resource use and CO2 output in that region, and the same phenomenon is projected to occur a little later in India.
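The overshoot ratios cited above imply a simple piece of arithmetic about how far per-capita consumption exceeds a sustainable rate. The two ratios are taken from the estimates in the text; the rest is illustration:

```python
# Overshoot arithmetic from the estimates cited in the text: humanity uses
# ~1.5x what the planet replenishes each year, and ~4x if everyone consumed
# like the average American. The two ratios are the text's; the rest is sketch.

current_ratio = 1.5   # 50% more than can be replenished per year
american_ratio = 4.0  # everyone-consumes-like-an-American scenario

# Per-capita cut needed, at current population, to reach a sustainable
# rate (a ratio of 1.0):
cut_now = 1 - 1 / current_ratio
cut_american = 1 - 1 / american_ratio
print(f"{cut_now:.0%} cut today; {cut_american:.0%} cut in the American scenario")
```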

Catastrophic Climate Change

Without a significant change in CO2 emissions, the world is on course for significant human-driven global warming; according to the latest IPCC report, an increase of 2.5 to 7.8 °C can be expected under “business as usual” assumptions. The lower end of this scale will have significant negative repercussions for developing nations in particular but is unlikely to constitute a global catastrophe; the upper end, however, would certainly have global catastrophic consequences. The wide range in part reflects significant uncertainty over how robust the climate system will be to the “forcing” effect of our activities. In particular, scientists focused on catastrophic climate change worry about a myriad of possible feedback loops. For example, a reduction of snow cover, which reflects the sun’s heat, could increase the rate of warming, resulting in greater loss of snow cover. The loss of arctic permafrost might result in the release of large amounts of methane into the atmosphere, which would accelerate the greenhouse effect further. The extent to which oceans can continue to act as both “heat sinks” and “carbon sinks” as we push the concentration of CO2 in the atmosphere upward is unknown. Scientists theorize the existence of “tipping points” which, once reached, might trigger an irreversible shift—for example, the collapse of the West Antarctic ice sheets, the melt of Greenland’s huge glaciers, or the collapse of the capacity of oceans to absorb heat and sequester CO2. In effect, beyond a certain point, a “rollercoaster” process may be triggered, in which 3 degrees of temperature rise may rapidly and irreversibly lead to 4 degrees, and then 5.
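The compounding character of a positive feedback loop can be shown with a deliberately simplified toy model; the coefficients below are invented for illustration and have no physical meaning:

```python
# Toy positive-feedback model: each step's warming adds a feedback term
# proportional to the warming already accumulated. Coefficients are
# invented for illustration; this is not climate physics.

def project_anomaly(steps: int, base_forcing: float, feedback: float) -> float:
    """Temperature anomaly after `steps`, with warming amplifying itself."""
    temp = 0.0
    for _ in range(steps):
        temp += base_forcing + feedback * temp
    return temp

# Without feedback the anomaly grows linearly; with feedback it compounds:
print(round(project_anomaly(50, 0.03, 0.00), 2))  # 1.5
print(round(project_anomaly(50, 0.03, 0.02), 2))  # 2.54
```

The same per-step forcing ends up producing substantially more warming once it feeds back on itself, which is the intuition behind worries about tipping points.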

Laudable progress has been made on achieving global coordination around the goal of reducing global carbon emissions, most notably in the aftermath of the December 2015 United Nations Climate Change Conference. 174 countries signed an agreement to reach zero net anthropogenic greenhouse gas emissions by the second half of the twenty-first century, and to “pursue efforts to limit” the temperature increase to 1.5 °C. But many experts hold that these goals are unrealistic, and that the commitments and actions being taken fall far short of what will be needed. According to the International Energy Agency’s Executive Director Fatih Birol: “We think we are lagging behind strongly in key technologies, and in the absence of a strong government push, those technologies will never be deployed into energy markets, and the chances of reaching the two-degree goal are very slim.”9

Soil Erosion

Soil erosion is a natural process. However human activity has increased the global rate dramatically, with deforestation, drought, and climate change accelerating the rate of loss of fertile soil. There are reasons to expect this trend to accelerate; some of the most powerful drivers of soil erosion are extreme weather events, and these events are expected to increase dramatically in frequency and severity as a result of climate change.

Biodiversity Loss

The world is entering an era of dramatic species extinction driven by human activity.10 Since 1900, vertebrate species have been disappearing at more than 100 times the rate seen in non-extinction periods. In addition to the intrinsic value of the diversity of forms of life on earth (the only life-inhabited planet currently known to exist in the universe), catastrophic risk scholars worry about the consequences for human societies. Ecosystem resilience is a tremendously complex phenomenon, and it seems plausible that tipping points exist in them. For example, the collapse of one or more keystone species underpinning the stability of an ecosystem could result in a broader ecosystem collapse with potentially devastating consequences for human system stability (for example, should key pollinator species disappear, the consequences for agriculture could be profound). Current human flourishing relies heavily on these ecosystem services, but we are threatening them at an unprecedented rate, and we have a poor ability to predict the consequences of our activity.

Everything Affects Everything Else

Once again, the sheer complexity and interconnectedness of these risks represents a key challenge. None of these processes happen in isolation, and developments in one affect the others. Climate change affects ecosystems by forcing species migration (for those that can), a change in plant and animal patterns of growth and behavior, and by driving species extinction. Reductions in available soil force us to drive more deeply into nonagricultural wilderness to provide the arable land we need to feed our populations. And the ecosystems we threaten play important roles in maintaining a stable climate and environment. Recognizing that we cannot get all the answers we need on these issues by studying them in isolation, threats posed by the interplay of these phenomena are a key area of study for catastrophic risk scholars.

All these developments result in a world with greater uncertainty, the emergence of huge and unpredictable new vulnerabilities, and more extreme and unprecedented events. These events will play out in a crowded world that contains more powerful technologies, and more powerful weapons, than have ever existed before.

Humanity and Technology in the Twenty-First Century

Our progress in science and technology, and related civilizational advances, have allowed us to house far more people on this planet, and have provided the power for those people to influence their environment more than any previous species. This progress is not of itself a bad thing, nor is the size of our global population.

There are good reasons to think that with careful planning, this planet should be able to house seven billion or more people stably and comfortably.11 With sustainable agricultural practices and innovative use of irrigation methods, it should be possible for many relatively uninhabited and agriculturally unproductive parts of the world to support more people and food production. Endless population growth on a finite planet is not possible without a collapse; however, growth until the point of collapse is by no means inevitable. Stabilization of population size is strongly correlated with several factors on which we are making steady global progress, including education (especially of women) and greater rights and control for women over their own lives. While there are conflicting studies,12 many experts hold that decreasing child mortality, while leading to population increase in the near term, leads to a drop in population growth in the longer term. In other words, as we move toward a better world, we will bring about a more stable world, provided intermediate stages in this process do not trigger a collapse or lasting global harm.13, 14

Current advances in science and technology, while not sufficient in themselves, will play a key role in making a more resilient and sustainable future possible. Rapid progress is happening in carbon-zero energy sources such as solar photovoltaics and other renewables.15 Energy storage remains a problem, but progress is occurring on battery efficiency. Advances in irrigation techniques and desalination technologies may allow us to provide water to areas where this has not previously been possible, allowing both food production and other processes that depend on reliable access to clean water. Advances in materials technology will have wide-ranging benefits, from lighter, more energy-efficient vehicles, to more efficient buildings and energy grids, to more powerful scientific tools and novel technological innovations. Advances in our understanding of the genetics of plants are leading to crops with greater yields, greater resilience to temperature shifts, droughts and other extreme weather, and greater resistance to pests—resulting in a reduction of the need for polluting pesticides. We are likely to see many further innovations in food production; for example, exciting advances in lab-grown meat may result in the production of meat with a fraction of the environmental footprint of livestock farming.

Many of the processes that have resulted in our current unsustainable trajectories can be traced back to the Industrial Revolution, and our widespread adoption of fossil fuels. However, the Industrial Revolution and fossil fuels must also be recognized as having unlocked a level of prosperity, and a rate and scale of scientific and technological progress that would simply not have been possible without them. While a continued reliance on fossil fuels would be catastrophic for our environment, it is unclear whether many of the “clean technology” breakthroughs that will allow us to break our dependence on fossil fuels would have been possible without the scientific breakthroughs that were enabled directly, or indirectly, by this rich, abundant, and easily available fuel source. The goal is clear: having benefitted so tremendously from this “dirty” stage of technology, we now need to take advantage of the opportunity it gives us to move onto cleaner and more powerful next-generation energy and manufacturing technologies. The challenge will be to do so before thresholds of irreversible global consequence have been passed.

With 537 square meters of solar panels and six blocks of lithium-ion batteries, PlanetSolar is the world’s largest solar ship, as well as its fastest. It is also the first to have sailed round the world using exclusively solar power.

The broader challenge is that humanity as a species needs to transition to a stage of technological development and global cooperation where as a species we are “living within our means”: producing and using energy, water, food, and other resources at a sustainable rate, and by methods that will not impose long-term negative consequences on our global habitat—for at least as long as we are bound to it. There are no physical reasons to think that we might not be capable of developing an extensive space-faring civilization at a future point. And if we last that long, it is likely we will develop extensive abilities to terraform extraterrestrial environments to be hospitable to us—or indeed, transform ourselves to be suitable to currently inhospitable environments. However, at present, in Martin Rees’s words, there is no place in our Solar System nearly as hospitable as the most hostile environment on earth, and so we are bound to this fragile blue planet.

Part of this broader challenge is gaining a better understanding of the complex consequences of our actions, and more so, of the limits of our current understanding. Even if we cannot know everything, recognizing when our uncertainty may lead us into dangerous territory can help us figure out an appropriately cautious set of “safe operating parameters” (to borrow a phrase from Steffen et al.’s “Planetary Boundaries”16) for our activities. The second part of the challenge, perhaps harder still, is developing the level of global coordination and cooperation needed to stay within these safe operating parameters.

Technological Wild Cards

While much of the Centre for the Study of Existential Risk’s research focuses on these challenges—climate change, ecological risks, resource use, and population, and the interactions between them—the other half of our work is on another class of factors: transformative emerging and future technologies. We might consider these “wild cards”: technological developments significant enough to change the course of human civilization in and of themselves. Nuclear weapons are such a wild card; their development changed the nature of geopolitics instantly and irreversibly. They also changed the nature of global risk: many of the stressors we worry about might now escalate quite quickly through human activity to a worst-case scenario involving a large-scale exchange of nuclear missiles. The scenario of most concern from an existential risk standpoint is one that might trigger a nuclear winter: a level of destruction sufficient to send huge amounts of particulate matter into the atmosphere and cause a lengthy period of global darkness and cold. If such a period persisted for long enough, it would collapse global food production and could drive the human species to near- or full extinction. There is disagreement among experts about the scale of nuclear exchange needed to trigger a nuclear winter, but it appears eminently plausible that the world’s remaining arsenals, if launched, might be sufficient.

Nuclear weapons could be considered a wild card in a different sense: the underlying science is one that enabled the development of nuclear power, a viable carbon-zero alternative to fossil fuels. This dual-use characteristic—that the underlying science and technology could be applied to both destructive purposes, and peaceful ones—is common to many of the emerging technologies that we are most interested in.

Key sciences and technologies of focus for scholars in this field include:

Topics within bioscience and bioengineering such as the manipulation and modification of certain viruses and bacteria, and the creation of organisms with novel characteristics and capabilities (genetic engineering and synthetic biology).

Geoengineering: a suite of proposed large-scale technological interventions that would aim to “engineer” our climate in an effort to slow or even reverse the most severe impacts of climate change.

Advances in artificial intelligence—in particular, those that relate to progress toward artificial general intelligence—AI systems capable of matching or surpassing human intellectual abilities across a broad range of domains and challenges.

Progress in these sciences is driven in large part by a recognition of their potential for improving our quality of life, or of the role they could play in helping us combat existing or emerging global challenges. However, in and of themselves they may also pose large risks.

Virus Research

Despite advances in hygiene, vaccines, and other health technology, natural pandemic outbreaks remain among the most potent global threats we face; for example, the 1918 Spanish influenza outbreak killed more people than World War I. This threat is of particular concern in our increasingly crowded, interconnected world. Advances in virology research are likely to play a central role in better defenses against, and responses to, viruses with pandemic potential.

A particularly controversial area is “gain-of-function” virology research, which aims to modify existing viruses to give them different host transmissibility and other characteristics. Such work may help identify strains with high pandemic potential, and aid the development of vaccines and antiviral treatments. However, research with infectious agents carries the risk of accidental release from research facilities, and there have been suspected cases. The 1977–78 Russian influenza outbreak is strongly suspected to have originated in a laboratory release event,17 and in the UK, the 2007 foot-and-mouth outbreak may have originated in the Pirbright animal disease research facility.18 Research on live infectious agents is typically done in facilities with the highest biosafety containment procedures, but some experts maintain that the potential for release, while low, remains, and may outweigh the benefits in some cases.

Some worry that advances in some of the same underlying sciences may make the development of novel, targeted biological weapons more feasible. In 2001 a research group in Australia inadvertently engineered a variant of mousepox with high lethality to vaccinated mice.19 An accidental or deliberate release of a similarly modified virus infecting humans, or a species we depend heavily on, could have catastrophic consequences.

Similarly, synthetic biology may lead to a wide range of tremendous scientific benefits. The field aims to design and construct new biological parts, devices, and systems, and to comprehensively redesign living organisms to perform functions useful to us. This may result in synthetic bacterial and plant “microfactories,” designed to produce new medicines, materials, and fuels, to break down waste, to act as sensors, and much more. In principle, such biofactories could be designed with much greater precision than current genetic modification approaches allow. They should also allow products to be produced cheaply and cleanly. Such advances would be transformative for many of the challenges we currently face, such as global health care, energy, and fabrication.

Moreover, as the tools and facilities needed to engage in the science of synthetic biology become cheaper, a growing “citizen science” community is emerging around synthetic biology. Community “DIY Bio” facilities allow people to engage in novel experiments and art projects; some hobbyists even engage in synthetic biology projects in their own homes. Many of the leaders in the field are committed to synthetic biology being as open and accessible as possible worldwide, with scientific tools and expertise available freely. Competitions such as iGEM (International Genetically Engineered Machine) encourage undergraduate student teams to build and test biological systems in living cells, often with a focus on applying the science to important real-world challenges, and also to archive their results and products so as to make them available to future teams to build on.

Such citizen science represents a wonderful way of making cutting-edge science accessible and exciting to generations of innovators. However, the increasing ease of access to increasingly powerful tools is a cause of concern to the risk community. Even if the vast majority of those engaging in synthetic biology are both responsible and well intentioned, the possibility of bad actors or unintended consequences (such as the release of an organism with unintended ecological consequences) exists. Further, we may expect that the range and severity of possible negative consequences will increase, as will the difficulty of tracking those who have access to the necessary tools and expertise. At present, biosafety and biosecurity are deeply embedded within the major synthetic biology initiatives. In the United States, the FBI works closely with synthetic biology centers, and leaders in the field espouse the need for good practices at every level. However, this area will progress rapidly, and a balance will need to be struck between giving a wide number of people access to powerful tools they can use for good, and restricting the potential for accidents or deliberate misuse. It remains to be seen how easy this balance will be to achieve.

Geoengineering presents a host of challenges. Stratospheric aerosol geoengineering is a particularly powerful proposal: here, a steady stream of reflective aerosols would be released into the upper atmosphere in order to reduce the amount of the sun’s light reaching the earth’s surface globally. This effectively mimics the global cooling phenomenon that occurs after a large volcanic eruption, when particulate matter is blasted into the atmosphere. However, current work is focused on theoretical modelling, with very minimal practical field tests carried out to date. Questions remain about how practically feasible it would be to achieve this on a global scale, and what impact it would have on rainfall patterns and crop growth.

It should be highlighted that this is not a solution to climate change. While global temperature might be stabilized or lowered, unless this was accompanied by a reduction in CO2 emissions, a host of harms such as ocean acidification would still occur. Furthermore, if CO2 emissions were allowed to continue rising during this period, a major risk termed “termination shock” could manifest. If any circumstance caused an abrupt cessation of stratospheric aerosol geoengineering, the elevated CO2 concentration in the atmosphere would drive a rapid jump in global temperature, with far more severe impacts on ecosystems and human societies than the already disastrous effects of a gradual rise.
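The termination-shock dynamic can be made concrete with a deliberately crude, zero-dimensional energy-balance sketch. Every number below (forcing growth rate, sensitivity, response timescale) is an illustrative assumption, not a calibrated climate value:

```python
# Toy sketch of "termination shock": CO2 forcing rises steadily, aerosol
# geoengineering offsets it exactly until a cutoff year, after which the
# full accumulated forcing hits the system at once.

def simulate(years=100, stop_aerosols_at=60, sensitivity=0.8, tau=10.0):
    """Return yearly global temperature anomalies (degrees C, illustrative)."""
    temps, T = [], 0.0
    for year in range(years):
        co2_forcing = 0.04 * year  # W/m^2, rising steadily (assumed)
        aerosol_forcing = -co2_forcing if year < stop_aerosols_at else 0.0
        net = co2_forcing + aerosol_forcing
        # Relax toward the equilibrium response with timescale tau (years)
        T += (sensitivity * net - T) / tau
        temps.append(T)
    return temps

temps = simulate()
# Temperature holds near zero while aerosols offset CO2, then climbs
# rapidly once the program stops and decades of forcing arrive at once.
```

The point of the sketch is qualitative only: the longer emissions continue under an aerosol shield, the larger and faster the jump when the shield fails.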

Critics fear that such research might be misunderstood as a way of avoiding the far more costly process of eliminating carbon emissions; and some are concerned that intervening in such a profound way in our planet’s functioning is deeply irresponsible. It also raises knotty questions about global governance: should any one country have the right to engage in geoengineering, and, if not, how could a globally coordinated decision be reached, particularly if different nations have different exposures to the impacts of climate change, and different levels of concern about geoengineering, given we are all under the same sky?

Proponents highlight that we may already be committed to severe global impacts from climate change at this stage, and that such techniques may give us the breathing room needed to transition to zero-carbon technology while temporarily mitigating the worst of the harms. Furthermore, unless research is carried out to assess the feasibility and likely impacts of this approach, we will not be well placed to make an informed decision at a future date, when the impacts of climate change may necessitate extreme measures. Eli Kintisch, a writer at Science, has famously called geoengineering “a bad idea whose time has come.”20

Artificial intelligence, explored in detail in Stuart Russell’s chapter, may represent the wildest card of all. Everything we have achieved in terms of our civilizational progress, and in shaping the world around us to our purposes, has been a product of our intelligence. However, some of the intellectual challenges we face in the twenty-first century are ones that human intelligence alone is not best suited to: for example, sifting through and identifying patterns in huge amounts of data, and integrating information from vast and interlinked systems. From analyzing disparate sources of climate data and millions of human genomes to running thousands of simulations, artificial intelligence will aid our ability to make use of the huge amount of knowledge we can gather and generate, and will help us make sense of our increasingly complex, interconnected world. Already, AI is being used to optimize energy use across Google’s servers, replicate intricate physics experiments, and discover new mathematical proofs. Many specific tasks traditionally requiring human intelligence, from language translation to driving on busy roads, are now becoming automatable, allowing greater efficiency and productivity, and freeing up human intelligence for the tasks that AI still cannot do. However, many of the same advances have more worrying applications: for example, allowing the collection and deep analysis of data on us as individuals, and paving the road for the development of cheap, powerful, and easily scalable autonomous weapons for the battlefield.

These advances are already having a dramatic impact on our world. However, the vast majority of these systems can be described as “narrow” AI. They can perform functions at human level or above in narrow, well-specified domains, but lack the general cognitive abilities that humans, dogs, or even rats have: general problem-solving ability in a “real-world” setting, an ability to learn from experience and apply knowledge to new situations, and so forth.

There is renewed enthusiasm for the challenge of achieving “general” AI, or AGI, which would be able to perform at human level or above across the range of environments and cognitive challenges that humans can. However, it is currently unknown how far we are from such a scientific breakthrough, or how difficult the fundamental challenges to achieving this will be, and expert opinion varies widely. Our only proof of principle is the human brain, and it will take decades of progress before we can meaningfully understand the brain to a degree that would allow us to replicate its key functions. However, if and when such a breakthrough is achieved, there is reason to think that progress from human-level general intelligence to superintelligent AGI might be achieved quite rapidly.

Improvements in the hardware and software components of AI, and in related sciences and technologies, might be made rapidly with the aid of advanced general AI. It is even conceivable that AI systems might directly engage in high-level AI research, in effect accelerating the process by allowing cycles of self-improvement. A growing number of experts in AI are concerned that such a process might quickly result in extremely powerful systems beyond human control; Stuart Russell has drawn a comparison with a nuclear chain reaction.
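The concern about compounding self-improvement can be illustrated with a deliberately simple toy model, in which the rate of capability gain is either fixed (research done only by humans) or scales with current capability (each gain also speeds further research). All figures are arbitrary:

```python
# Compare linear progress (fixed external research effort) with
# compounding progress (capability gains feed back into research speed).

def grow(years, feedback):
    """Return a capability trajectory under an arbitrary 0.1/year base rate."""
    c, history = 1.0, []
    for _ in range(years):
        c += 0.1 * (c if feedback else 1.0)
        history.append(c)
    return history

no_feedback = grow(50, feedback=False)   # steady, linear improvement
with_feedback = grow(50, feedback=True)  # self-improvement compounds
# After 50 steps, the compounding trajectory dwarfs the linear one.
```

The model proves nothing about real AI systems; it only shows why a feedback loop between capability and research speed changes linear growth into something qualitatively different.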

Superintelligent AI has the potential to unlock unprecedented progress on science, technology, and global challenges; to paraphrase the founders of Google DeepMind, if intelligence can be “solved,” it can then be used to help solve everything else. However, the risk from this hypothetical technology, whether through deliberate use or unintended runaway consequences, could be greater than that of any technology in human history. If it is plausible that this technology might be achieved in this century, then a great deal of research and planning—both on the technical design of such systems, and the governance structures around their development—will be needed in the decades beforehand in order to achieve a desirable transition.

Predicting the Future

The field also engages in exploratory and foresight-based work on more forward-looking topics; these include future advances in neuroscience and nanotechnology, future physics experiments, and proposed manufacturing technologies that may be developed in coming decades, such as molecular manufacturing. While we are limited in what we can say in detail about future scientific breakthroughs, it is often possible to establish some useful groundwork. For example, we can identify developments that should, in principle, be possible based on our current understanding of the relevant science. And we can dismiss ideas that are pure “science fiction,” or sufficiently unfeasible to be safely ignored for now, or that represent a level of progress that makes them unlikely to be achieved for many generations.

By focusing further on those that could plausibly be developed within the next half century, we can give consideration to their underlying characteristics and possible impacts on the world, and to the broad principles we might bear in mind for their safe development and application. While it would have been a fool’s errand to try to predict the full impacts of the Internet prior to 1960, or of the development of nuclear weapons prior to 1945, it would certainly have been possible to develop some thinking around the possible implications of very sophisticated global communications and information-sharing networks, or of a weapon of tremendous destructive potential.

Lastly, if we have some idea of the directions from which transformative developments might come, we can engage in foresight and road-mapping research. This can help identify seemingly insignificant breakthroughs and developments that in fact indicate meaningful progress toward a transformative technology, or toward a threshold beyond which global dynamics are likely to shift significantly (such as photovoltaics and energy storage becoming cheaper and more easily accessible than fossil fuels).
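As a sketch of this kind of threshold road-mapping, one can project a technology’s cost under a simple learning-curve assumption and ask when it undercuts an incumbent. The prices and decline rate below are hypothetical placeholders, not forecasts:

```python
# Find the first year a steadily cheapening technology undercuts a
# flat-cost incumbent, given an assumed constant annual cost decline.

def crossover_year(start_cost, incumbent_cost, annual_decline, start_year=2020):
    """Return the first year cost falls below the incumbent's cost."""
    cost, year = start_cost, start_year
    while cost >= incumbent_cost:
        cost *= (1 - annual_decline)
        year += 1
    return year

# Hypothetical: solar at $60/MWh declining 7%/year vs. fossil flat at $40/MWh.
year = crossover_year(60.0, 40.0, 0.07)  # 2026 under these assumptions
```

Real road-mapping would of course treat decline rates as uncertain and track leading indicators rather than extrapolate a single curve; the sketch only shows why identifying such thresholds in advance is tractable at all.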

Confronting the Limits of Our Knowledge

A common theme across these emerging technologies and emerging risks is that a tremendous level of scientific uncertainty and expert disagreement typically exists. This is particularly the case for future scientific progress and capabilities, the ways in which advances in one domain may influence progress in others, and the likely global impacts and risks of projected advances. Active topics of research at CSER include how to obtain useful information from a range of experts with differing views, and how to make meaningful scientific progress on challenges where we have discontinuous data, or few case studies to draw on, or even when we must characterize an entirely unprecedented event. This might be a hypothesized ecological tipping point, which when passed would result in an irreversible march toward the collapse of an entire critical ecosystem. Or it might be a transformative scientific breakthrough such as the development of artificial general intelligence, where we only have current trends in AI capability, hardware, and expert views on the key unsolved problems in the field to draw insight from. It is unrealistic to expect that we can always, or even for the most part, be right. We need to have humility, to expect false positives, and to be able to identify priority research targets from among many weak signals.
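One concrete illustration of extracting a usable signal from divergent expert views (a generic pooling rule from the forecasting literature, not a method attributed to CSER) is to combine probability estimates via the geometric mean of the experts’ odds:

```python
import math

def pool_odds(probabilities):
    """Combine expert probabilities via the geometric mean of their odds."""
    odds = [p / (1.0 - p) for p in probabilities]
    log_mean = sum(math.log(o) for o in odds) / len(odds)
    pooled_odds = math.exp(log_mean)
    return pooled_odds / (1.0 + pooled_odds)

# Three hypothetical experts differ by a factor of twenty:
pooled = pool_odds([0.01, 0.05, 0.20])
# The pooled estimate lies between the extremes, pulled below the
# arithmetic mean by the more skeptical experts.
```

Odds-based pooling is one of several defensible rules; the harder problems noted above (discontinuous data, unprecedented events) lie in deciding which experts and which rules to trust, not in the arithmetic.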

Recognizing that there are limits to the level of detail and certainty that can be achieved, this work is often combined with work on general principles of scientific and technological governance. For example, work under the heading of “responsible innovation” focuses on the challenge of developing collective stewardship of progress in science and technology in the present, with a view to achieving good future outcomes.21 This combines scientific foresight with processes to involve the key stakeholders at the appropriate stages of a technology’s development. At different stages these stakeholders will include: scientists involved in fundamental research and applied research; industry leaders; researchers working on the risks, benefits, and other impacts of a technology; funders; policymakers; regulators; NGOs and focus groups; and laypeople who will use or be affected by the development of a technology. In the case of technologies with a potential role in global catastrophic risk, the entire global population holds a stake. Therefore decisions with long-term consequences must not rest solely with a small group of people, represent only the values of a small subset of people, or fail to account for the likely impacts on the global population.

There have been a number of very encouraging specific examples of such foresight and collaboration, where scientific domain specialists, interdisciplinary experts, funders, and others have worked together to try to guide an emerging technology’s development, establish ethical norms and safety practices, and explore its potential uses and misuses in a scientifically rigorous way. In bioengineering, the famous 1975 Asilomar conference on recombinant DNA established important precedents, and more recently summits have been held on advances such as human gene editing. In artificial intelligence, a number of important conferences have been held recently, with enthusiastic participation from academic and industry research leaders in AI alongside interdisciplinary experts and policymakers. A number of the world’s leading AI research teams have established ethical advisory panels to inform and guide their scientific practices, and a cross-industry “partnership on AI to benefit people and society” involving five companies leading fundamental research has recently been announced.22

More broadly, it is crucial that we learn the lessons of past technologies and, where possible, develop principles and methodologies that we can take forward. This may give us an advantage in preparing for developments that are currently beyond our horizon, an advantage that methodologies tied too closely to specific technologies and risks cannot provide. One of the key concerns associated with risks from emerging and future technologies is the rate at which progress occurs and at which the associated threats may arise. While every science will throw up specific challenges and require domain-specific techniques and expertise, any tools or methodologies that help us to intervene reliably earlier are to be welcomed. There may be a limited window of opportunity for averting such risks. Indeed, this window may occur in the early stages of developing a technology, well before the fully mature technology is out in the world, where it is difficult to control. Once Pandora’s box is open, it is very difficult to close.

Working on the (Doomsday) Clock

Technological progress now offers us a vision of a remarkable future. The advances that have brought us onto an unsustainable pathway have also raised the quality of life dramatically for many, and have unlocked scientific directions that can lead us to a safer, cleaner, more sustainable world. With the right developments and applications of technology, in concert with advances in social, democratic, and distributional processes globally, progress can be made on all of the challenges discussed here. Advances in renewable energy and related technologies, and more efficient energy use—advances that are likely to be accelerated by progress in technologies such as artificial intelligence—can bring us to a point of zero-carbon emissions. New manufacturing capabilities provided by synthetic biology may provide cleaner ways of producing products and degrading waste. A greater scientific understanding of our natural world and the ecosystem services on which we rely will aid us in plotting a trajectory whereby critical environmental systems are maintained while allowing human flourishing. Even advances in education and women’s rights globally, which will play a role in achieving a stable global population, can be aided specifically by the information, coordination, and education tools that technology provides, and more generally by growing prosperity in the relevant parts of the world.

There are catastrophic and existential risks that we will simply not be able to overcome without advances in science and technology. These include possible pandemic outbreaks, whether natural or engineered. The early identification of incoming asteroids, and approaches to shift their path, is a topic of active research at NASA and elsewhere. While currently there are no known techniques to prevent or mitigate a supervolcanic eruption, this may not be the case with the tools at our disposal a century from now. And in the longer run, a civilization that has spread permanently beyond the earth, enabled by advances in spaceflight, manufacturing, robotics, and terraforming, is one that is much more likely to endure. However, the breathtaking power of the tools we are developing is not to be taken lightly. We have been very lucky to muddle through the advent of nuclear weapons without a global catastrophe. And within this century, it is realistic to expect that we will be able to rewrite much of biology to our purposes, intervene deliberately and in a large-scale way in the workings of our global climate, and even develop agents with intelligence that is fundamentally alien to ours, and may vastly surpass our own in some or even most domains—a development that would have uniquely unpredictable consequences.

It is reassuring to note that there are relatively few individual events that could cause an existential catastrophe—one resulting in extinction or a permanent civilizational collapse. Setting aside the very rare events (such as supervolcanoes and asteroids), the most plausible candidates include nuclear winter, extreme global warming or cooling scenarios, the accidental or deliberate release of an organism that radically altered the planet’s functioning, or the release of an engineered pathogen. They also include more speculative future advances: new types of weaponry, runaway artificial intelligence, or maybe physics experiments beyond what we can currently envisage. Many global risks are, in isolation, survivable—at least for some of us—and it is likely that human civilization could recover from them in the long run: less severe global warming, various environmental disasters and ecosystem collapses, widespread starvation, most pandemic outbreaks, conventional warfare (even global).

However, this latter class of risks, and the factors that might drive them (such as population, resource use, and climate change), should not be ignored in the broader study of existential risk. Nor does it make sense to consider these challenges in isolation: in our interconnected world they all affect each other. The threat of global nuclear war has not gone away, and many scholars believe that it may be rising again (at the time of writing, North Korea has just undergone its most ambitious nuclear test to date). If climate pressures, drought, famine, and other resource pressures serve to escalate geopolitical tensions, or if the potential use of a new technology, such as geoengineering, could lead to a nuclear standoff, then the result could be an existential threat.

For all these reasons and more, a growing community of scholars across the world believe that the twenty-first century will see greater change and greater challenges than any century in humanity’s past. It will be a century of unprecedented global pressures, and a century in which extreme and unpredictable events are likely to happen more frequently than ever before. It will be a century in which the power of technologies unlike any in our history will hang over us like multiple swords of Damocles. But it will also be a century in which the technologies and institutional structures we develop may aid us in solving many of the problems we currently face—if we guide their development, and their uses and applications, carefully.

It will be a century in which we as a species will need to learn to cooperate on a scale and at a depth that we never have before, both to avoid conflict with the weapons of cataclysmic power we have developed, and to avoid the harmful consequences of our combined activities on the planet. And despite how close we came to falling at the first hurdle with nuclear weapons, there are reasons for great optimism. The threat they presented has led to a greater level of international cooperation, and to stronger international structures for avoiding large-scale war, than has ever existed before. In a world without nuclear weapons, we might well have seen a third world war by now. And the precedent set by international efforts around climate change mitigation is a powerful one. In December 2015, nations around the world agreed to take significant steps to reduce the likelihood of global catastrophic harm to future generations, even though in many cases these steps may run against both the individual economic interests of those nations and the economic interests of the current generation. With each of these steps, we learn more, and we put another plank in the scientific and institutional scaffolding we will need to respond effectively to the challenges to come.

If we get it right in this century, humanity will have a long future on earth and among the stars.

Acknowledgments

Seán Ó hÉigeartaigh’s work is supported by a grant from Templeton World Charity Foundation. The opinions expressed in this chapter are those of the author and do not necessarily reflect the views of Templeton World Charity Foundation.

1. M. Garber, “The man who saved the world by doing absolutely nothing,” The Atlantic (2013). http://www.theatlantic.com/technology/archive/2013/09/the-man-who-saved-the-world-by-doing-absolutely-nothing/280050.

2. P. M. Lewis et al., Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy (London: Chatham House, the Royal Institute of International Affairs, 2014).

3. Nick Bostrom, “Existential risks,” Journal of Evolution and Technology 9(1) (2002).

4. J. Jefferies, “The UK population: past, present and future,” in Focus on People and Migration (London: Palgrave Macmillan UK, 2005).

5. B. Gates, “The next epidemic—lessons from Ebola,” New England Journal of Medicine 372(15) (2015).

6. Jeremy J. Farrar and Peter Piot, “The Ebola emergency—immediate action, ongoing strategy,” New England Journal of Medicine 371 (2014): 16.

7. See http://slatestarcodex.com/2014/07/30/meditations-on-moloch.

8. World Population Prospects: the 2015 Revision. United Nations Department of Economic and Social Affairs. https://esa.un.org/unpd/wpp/Publications/Files/Key_Findings_WPP_2015.pdf.

9. See https://www.technologyreview.com/s/601601/six-months-after-paris-accord-were-losing-the-climate-change-battle.

10. G. Ceballos et al., “Accelerated modern human-induced species losses: Entering the sixth mass extinction,” Science Advances 1(5) (2015).

11. Toby Ord, “Overpopulation or underpopulation,” in Is the Planet Full?, Ian Goldin (ed.), (Oxford: OUP, 2014).

12. J. D. Shelton, “Taking exception. Reduced mortality leads to population growth: an inconvenient truth,” Global Health: Science and Practice 2(2) (2014).

13. See https://www.givingwhatwecan.org/post/2015/09/development-population-growth-and-mortality-fertility-link.

14. Bill Gates, “2014 Gates annual letter: 3 myths that block progress for the poor,” Gates Foundation 14 (2014). http://www.gatesfoundation.org/Who-We-Are/Resources-and-Media/Annual-Letters-List/Annual-Letter-2014.

15. D. King et al., A Global Apollo Programme to Combat Climate Change (London: London School of Economics, 2015). http://www.globalapolloprogram.org/

16. W. Steffen et al., “Planetary boundaries: Guiding human development on a changing planet,” Science 347(6223) (2015).

17. M. Rozo and G. K. Gronvall, “The reemergent 1977 H1N1 strain and the gain-of-function debate,” mBio 6(4) (2015).

18. T. Hugh Pennington, “Biosecurity 101: Pirbright’s lessons in laboratory security,” BioSocieties 2(04) (2007).

19. M. J. Selgelid et al., “The mousepox experience,” EMBO reports 11(1) (2010): 18–24.

20. See https://www.wired.com/2010/03/hacktheplanet-qa.

21. J. Stilgoe et al., “Developing a framework for responsible innovation,” Research Policy 42(9) (2013).

22. See http://www.partnershiponai.org.

The Scientist: Creator and Destroyer—“Scientists’ Warning to Humanity” Is a Wake-Up Call for Researchers



1. Introduction

2. The Scientist

“It would be folly to argue that our knowledge is sufficient to allow any expert, in any realm of social importance, to claim finality for his outlook. He too often, also, fails to see his results in their proper perspective […] The expert, in fact, simply by reason of his immersion in a routine, tends to lack flexibility of mind once he approaches the margin of his special theme.” —Harold J. Laski, The Limitations of the Expert, 1931.

3. The Search for Truth

“Formerly, the pure scientist or the pure scholar had only one responsibility beyond those which everybody has; that is, to search for truth.” —Karl Popper, The Moral Responsibility of the Scientist, 1969.

4. Quantification and Rationality

“The treasure of empirical contemplation, collected through ages, is in no danger of experiencing any hostile agency from philosophy.” —Alexander von Humboldt, Cosmos, 1845.

5. Scientific Cultures

“The British school insisted that the ultimate source of all knowledge was observation, while the Continental school insisted that it was the intellectual intuition of clear and distinct ideas.” —Karl Popper, The Sources of Knowledge and Ignorance, 1962.

6. Perceiving the Studied Subject

7. Theoreticians and Practitioners

“Science cannot make progress without the action of two distinct classes of thinkers: the first consisting of men of creative genius, who strike out brilliant hypotheses, and who may be spoken of as ‘theorizers’ in the good sense of the word; the second, of men possessed of the critical faculty, and who test, mold into shape, perfect or destroy, the hypotheses thrown out by the former class.” —Mivart, The Essays, 1892.

8. Concluding Remarks

Acknowledgments

Conflicts of Interest

  • Ripple, W.J.; Wolf, C.; Newsome, T.M.; Galetti, M.; Alamgir, M.; Crist, E.; Mahmoud, M.I.; Laurance, W.F.; and 15,364 scientist signatories from 184 countries. World scientists’ warning to humanity: A second notice. BioScience 2017, 67, 1026–1028.
  • OECD. Global Material Resources Outlook to 2060: Economic Drivers and Environmental Consequences; OECD Publishing: Paris, France, 2019.
  • Gee, D.; MacGarvin, M.; Stirling, A.; Keys, J.; Wynne, B.; Vaz, S.G. Late Lessons from Early Warnings: The Precautionary Principle 1896–2000; Harremoës, P., Ed.; Office for Official Publications of the European Communities: Luxembourg, 2001.
  • Popper, K.R. The moral responsibility of the scientist. Bull. Peace Propos. 1971, 22, 79–83.
  • Popper, K.R. Sources of knowledge and ignorance. In Conjectures and Refutations: The Growth of Scientific Knowledge; Routledge: London, UK, 2014.
  • Epstein, S.S. Corporate crime: Why we cannot trust industry-derived safety studies. Int. J. Health Serv. 1990, 20, 443–458.
  • Goodstein, D. Scientific misconduct. Academe 2002, 88, 28.
  • Ioannidis, J.P. Why most published research findings are false. PLoS Med. 2005, 2, e124.
  • Flannery, T.F. The Weather Makers: How Man Is Changing the Climate and What It Means for Life on Earth; Atlantic Grove Press: New York, NY, USA, 2005.
  • Merton, R.K. Social structure and anomie. Am. Sociol. Rev. 1938, 3, 672–682.
  • Fleck, L. Entstehung und Entwicklung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollektiv; Benno Schwabe und Co.: Basel, Switzerland, 1935.
  • Evans, H. The SLI Effect: Street Lamp Interference: A Provisional Assessment. 1993. Available online: www.assap.ac.uk/newsite/Docs/sli.pdf (accessed on 28 July 2019).
  • Levengood, W.C.; Talbott, N.P. Dispersion of energies in worldwide crop formations. Physiol. Plant. 1999, 105, 615–624.
  • Arrhenius, S. Worlds in the Making: The Evolution of the Universe; Harper & Brothers: New York, NY, USA; London, UK, 1908.
  • Mileikowsky, C.; Cucinotta, F.A.; Wilson, J.W.; Gladman, B.; Horneck, G.; Lindegren, L.; Melosh, J.; Rickman, H.; Valtonen, M.; Zheng, J.Q. Risks threatening viable transfer of microbes between bodies in our solar system. Planet. Space Sci. 2000, 48, 1107–1115.
  • Grebennikova, T.V.; Syroeshkin, A.V.; Shubralova, E.V.; Eliseeva, O.V.; Kostina, L.V.; Kulikova, N.Y.; Latyshev, O.E.; Morozova, M.A.; Yuzhakov, A.G.; Zlatskiy, I.A.; et al. The DNA of bacteria of the World Ocean and the Earth in cosmic dust at the International Space Station. Sci. World J. 2018, 18, 1–7.
  • Proctor, R.N. Agnotology: A missing term to describe the cultural production of ignorance (and its study). In Agnotology: The Making and Unmaking of Ignorance; Proctor, R.N., Schiebinger, L., Eds.; Stanford University Press: Stanford, CA, USA, 2008; pp. 1–33.
  • Mivart, G.J. Essays and Criticism, Volume II; Little, Brown & Co.: Boston, MA, USA, 1892.
  • Popper, K.R. Logik der Forschung: Zur Erkenntnistheorie der modernen Naturwissenschaft; Mohr Siebeck Verlag: Tübingen, Germany, 1934.
  • Popper, K.R. Conjectures and Refutations: The Growth of Scientific Knowledge; Basic Books: New York, NY, USA, 1962.
  • Wynne, B. Misunderstood misunderstanding: Social identities and public uptake of science. Public Underst. Sci. 2016, 21, 281–304.
  • Trevor-Roper, P. The World Through Blunted Sight: An Inquiry into the Influence of Defective Vision on Art and Character; Thames and Hudson: London, UK, 1970.
  • Wynne, B. Knowledges in context. Sci. Technol. Hum. Values 1991, 16, 111–121.
  • Saint-Exupéry, A. de. The Little Prince; Woods, K., Translator; Harcourt, Brace and Company: New York, NY, USA, 1943.
  • Cuhra, M. Evaluation of glyphosate resistance: Is the rhizosphere microbiome a key factor? J. Biol. Phys. Chem. 2018, 18, 78–93.
  • Cuhra, M. Observations of water-flea Daphnia magna and avian fecalia in rock pools: Is traditional natural history reporting still relevant for science? J. Nat. Hist. 2019, 53, 315–334.
  • Porter, S.B.; Buie, M.W.; Parker, A.H.; Spencer, J.R.; Benecchi, S.; Tanga, P.; Verbiscer, A.; Kavelaars, J.J.; Gwyn, S.D.; Young, E.F.; et al. High-precision orbit fitting and uncertainty analysis of (486958) 2014 MU69. Astron. J. 2018, 156, 20–38.
  • Crane, L. Space rocks reveal surprises galore. New Sci. 2019, 3223, 8.
  • Persons, W.S., IV; Currie, P.J.; Erickson, G.M. An older and exceptionally large adult specimen of Tyrannosaurus rex. Anat. Rec. 2019, 23, 1–17.
  • Williamson, D.I. Incongruous larvae and the origin of some invertebrate life-histories. Prog. Oceanogr. 1987, 19, 87–116.
  • Williamson, D. Larvae and Evolution: Toward a New Zoology; Chapman & Hall: London, UK; New York, NY, USA, 1992.
  • Editorial comment. Prog. Oceanogr. 1987, 19, I–II.
  • Kammerer, P. The Inheritance of Acquired Characteristics; Boni & Liveright: New York, NY, USA, 1924.
  • Koestler, A. The Case of the Midwife Toad; Hutchinson: London, UK, 1971.
  • Sagan, L. On the origin of mitosing cells. J. Theor. Biol. 1967, 14, 225–278.
  • Lazcano, A.; Peretó, J. On the origin of mitosing cells: A historical appraisal of Lynn Margulis endosymbiotic theory. J. Theor. Biol. 2017, 434, 80–87.
  • Laski, H.J. The Limitations of the Expert; Fabian Society: London, UK, 1931.
  • Ware, M.; Mabe, M. The STM Report: An Overview of Scientific and Scholarly Journal Publishing; International Association of Scientific, Technical and Medical Publishers: The Hague, The Netherlands, 2005.
  • Goldbort, R.C. Scientific writing as an art and as a science. J. Environ. Health 2001, 63, 22–24.

Share and Cite

Cuhra, M. The Scientist: Creator and Destroyer—“Scientists’ Warning to Humanity” Is a Wake-Up Call for Researchers. Challenges 2019 , 10 , 33. https://doi.org/10.3390/challe10020033



Can science and technology really help solve global problems? A UN forum debates vital question

Marc Pecsteen de Buytswerve (2nd right), the Permanent Representative of Belgium to the UN and chair of the session, speaks at the plenary session during the ECOSOC Integration Segment. Also in the picture are Liu Zhenmin, Under-Secretary-General for Econ


Science and technology offer part of the solution to climate change, inequality and other global issues, a United Nations official said on Tuesday, spotlighting the enormous potential these fields hold for achieving humanity’s common goal of a poverty- and hunger-free world by 2030.

“New advances in science and technology hold immense promises for achieving the 2030 Agenda for Sustainable Development ,” said UN Under-Secretary-General for Economic and Social Affairs, Liu Zhenmin, in his opening remarks to a session of the intergovernmental body overseeing the UN’s development work.

The 2018 Integration Segment of the Economic and Social Council ( ECOSOC ), being held from Tuesday to Thursday at UN Headquarters, brings together key stakeholders to review policies that support an integrated approach to achieving sustainable development and poverty eradication - with a focus this year on increasing resilience.

“To truly leverage the benefits of science and technology for sustainable development, we need to prioritize solutions that are pro-poor and equitable,” Mr. Liu said. “Only in this way can we ensure that no one is left behind.”

He stated that a rapidly warming planet is one of the greatest threats today, but that a wide array of technological measures for climate change adaptation and mitigation can help the transition from carbon-intensive growth towards more sustainable and resilient development.

Technologies can also help provide jobs to disadvantaged groups in society, and can help make cities smarter and more sustainable, by facilitating new transport systems and improving the management of natural resources.

To truly leverage the benefits of science and technology for sustainable development, we need to prioritize solutions that are pro-poor and equitable –  Liu Zhenmin, head of DESA

Threatened by unsustainable consumption and production patterns, the ocean is also suffering, he added. Numerous technologies have been shown to help mitigate and address these effects, such as innovations in sustainable fishing, enhanced surveillance of ocean acidification, and environmentally sensitive forms of pollution prevention and clean-up.

To make new technology and innovation work in support of communities, any efforts must be driven on a local level, and be inclusive. 

Taking integrated approaches and working to break down barriers is of utmost urgency, too, as crises and shocks are increasingly complex and span the economic, social and environmental spheres. 

“And, finally, we need to build capacities and institutions for anticipating risk, and for planning and strategic foresight to effectively leverage technologies,” Mr. Liu said.

Also addressing the opening segment was Marc Pecsteen, Vice-President of the Economic and Social Council, who said that technology and innovation have been identified as “two key enablers, whose appropriate, efficient, equitable and sustainable use can support our efforts to build and maintain resilient societies.”

Technology: Ushering in world peace or an existential crisis?

Al Jazeera speaks to experts and grassroots workers on whether tech is a force for good or a tool in hands of powerful.


Doha, Qatar – Can technology unlock world peace? A panel of leading global experts was asked the question at the Doha Debates forum held in Qatar last December.

The three experts – Allison Puccioni, Subbu Vincent and Ariel Conn –  deliberated on whether technology will potentially play a role in helping usher in lasting world peace or create an existential crisis for humanity.


Speaking at the debate in Doha, Puccioni, a world-renowned practitioner of imagery intelligence, said the increasing use of technology has helped democratise information that will eventually “help create a more equitable world”.

“Today, we can access from our smartphones the kinds of information that, a few decades ago, were held only in the hands of the most clandestine echelons of elite intelligence agencies,” she told the audience in her opening remarks.

Is technology the key to world peace? We teamed up with the UN to host a debate on this very question. We were joined by Tech Writer and AI Policy Specialist Ariel Conn, Media Ethicist Subbu Vincent and Imagery Analyst Allison Puccioni for this special edition of Doha Debates. pic.twitter.com/pVRHrA8wAH — Doha Debates (@DohaDebates) December 19, 2019

A few years ago, Puccioni worked at a company where she was tasked with working with satellite imagery, YouTube videos, Twitter sentiment and other open-source information to track the Nigerian armed group Boko Haram.

“That sort of information before may have [been available to] only part of intelligence agencies … but when the media has access, it has the capacity to hold governments accountable,” she added.

While it would be “impossible” to argue that challenges created by technology do not exist,  Puccioni told Al Jazeera  the availability of information to a larger audience is a good thing in the long run.

‘Putting ethics into technology’

Vincent, director for the Journalism and Media Ethics programme at Santa Clara University’s Markkula Centre for Applied Ethics, in the US, preferred to take the “middle road”.

“There’s always been grounds for optimism. But it’s been hyped up so much that people have forgotten that technology is really an amoral thing. It is designed without a moral sense,” Vincent added.

While technology has been used as a force for good, helping to mobilise people, it has also been used “to spread lies, disinformation and hate speech, amplify conspiracy theories, sow discord and divide people,” Vincent told Al Jazeera.

“In the hands of good actors, it can be used for good. In the hands of bad actors, it will be used for bad,” he said.

‘Existential crisis’

The third participant in the debate, Conn, believes technology may have created an “existential crisis” for humans.

Conn, a former director of Media and Outreach for the Future of Life Institute, pointed out the connection between technology and warfare and how, throughout recent modern history, it has been used to bring about immense destruction.

“Technology is primarily developed for military prowess, for profit or both. And unfortunately, war is far more profitable than peace,” she said in her opening remarks.

Al Jazeera spoke to grassroots activists, tech workers and entrepreneurs about the debate and how technology has impacted their work.

Dalia Shurrab, social media coordinator, Gaza


Dalia Shurrab, a social media coordinator in the occupied Gaza Strip, told Al Jazeera she agreed with Puccioni.

“As a person who is trapped under an unjust siege on the Gaza Strip, I used the internet and technology to be connected to the outer world, and I have so many chances to speak out, to represent my people and to express my opinion,” Shurrab said.

Gaza has been under an Israeli and Egyptian land, air and sea blockade since 2007, imposed after the Palestinian group Hamas won parliamentary elections in a result not recognised by its rival faction Fatah; the ensuing infighting within the coastal enclave ended with Hamas running Fatah out of the Strip.

As a result, it has been extremely difficult for many Palestinians in Gaza to leave the coastal enclave, a situation which led the UN humanitarian chief to describe it as an “open-air prison”.

Shurrab works for the Gaza Sky Geeks (GSG), a tech hub in Gaza City that helps startups to grow their businesses outside of the coastal enclave of nearly two million people.

“We initiated our platform, SkyLancer Online to help people like Gazans who live under hard situations, to start learning how to be good freelancers using various freelance platforms and hunt jobs through social media platforms like LinkedIn and Twitter,” she told Al Jazeera.

“I am an optimistic person and always searching for hope … I can’t say that I disagree with Conn, but I prefer to think positively.”

Jillian C York, free speech activist, Germany

Jillian York, a free speech advocate based in the German capital, Berlin, highlighted the role of technology, particularly the internet, in amplifying the voices of marginalised communities such as LGBT people.

“I think that technology is a double-edged sword in light of this [debate] question … Conn and Vincent both noted the downsides, but we also have to look at the ways in which technology, and specifically the internet, is facilitating global movements,” she noted.


“If you’re an LGBT person in a rural area or a conservative one, you might not think anyone shares your views/experiences. But online, you can find people who are like you, who share the same causes and who have lived similar lives,” York said. 

On the other hand, she said she has “experienced first-hand how governments use technology – sometimes even that which was ostensibly built for good – to harm their citizens”.

So, while technology can certainly democratise information, York said, it is important to note how states have abused it to silence their own people, citing surveillance and facial recognition technology as examples.

Tahir Imin, Uighur activist, the US


US-based Uighur activist Tahir Imin pointed out how a lack of regulation has led to breaches of privacy and technology being used for mass surveillance.

Imin said that advanced tech companies, including US ones, helped China harvest data that was used for the surveillance and arrest of Uighurs.

“It’s a terrible feeling when a helpless people face a regime which is backed by the most advanced technology,” he said.

Imin believes it is important to note that some Western tech companies have “successfully pretended to be defenders of human rights and data privacy rights in the West” – while at the same time they “cooperated with authoritarian regimes like China”.

In the past year, Chinese authorities have come under criticism for applying facial recognition technology, particularly to identify members of the Uighur Muslim minority in Xinjiang province.

According to some estimates, up to a million Uighurs were moved to internment camps after they were racially profiled in Xinjiang and in other provinces.

“I, along with millions of other Uighurs, was forced by Chinese authorities to draw my blood, my face scanned, my voice recorded without knowing why and how they will use it,” Imin told Al Jazeera.

Nevertheless, despite his criticism of tech organisations, Imin said he remained optimistic about the future of technology, which has helped him convey the voice of his people to audiences around the world and raise awareness of the issues facing the Uighur minority in China.

Sarwar Kashani, Kashmiri writer based in New Delhi

For Sarwar Kashani, Puccioni sounded “unrealistically optimistic” on the idea that tech could lead to a more equitable world.

“I don’t think information technology will be used to unite humans, bring peace and augment democratic values. It is a myth that tech will ever be a peace enabler. Look at the people who are pulling strings,” he told Al Jazeera.

“They are modern-day Nazis, autocrats and conflict sellers,” he added. “FB, Google, Twitter and WhatsApp and other platforms are hand in glove with dictators masquerading as liberal democrats.”


In a recent controversy, social media giant Facebook was criticised for not acting against a post by US President Donald Trump that appeared to glorify violence in the wake of the George Floyd protests.

A communications ban in Kashmir was gradually lifted from January, but a curb on internet speed remains to this day, despite a global pandemic.

After India revoked Kashmir’s autonomous status last August and imposed a communications blackout across the disputed region, Kashmiris like Kashani were suddenly cut off from the outside world.

“Just imagine how tough it is to live without the internet in the 21st century,” he told Al Jazeera.

However, Kashani said, even if Kashmir had free access to the internet and social media platforms, censorship would be prevalent, and the voice of his people would remain curtailed.

“I have absolutely no hope even if India allows free access to social media in Kashmir,” he lamented.

“You have partial freedom over what you write on the internet [Twitter, Facebook], but you have no control over what you read. Do you think the posts criticising the government disappear on their own? It is being done by design,” he said.

Alima Bawah, entrepreneur, Ghana


Alima Bawah, a Ghanaian entrepreneur, says she is “1,000 percent” an optimist.

The former journalist said that a lack of basic necessities and conditions like hunger could often lead to conflict.

She says technology has, in this regard, improved people’s lives for the better, particularly in rural areas.

She co-founded Cowtribe, a company that provides “on-demand and subscription-based” animal vaccine delivery and other vet services to “last-mile” farmers, those at the end of the agricultural value chain, in Ghana.

Farmers can now access essential services like agronomic foods, vaccines and other veterinary services, she said.

According to the UN food agency, agriculture contributes 54 percent of Ghana’s GDP and provides more than 90 percent of the country’s food requirement.

Priscilla Madrid, feminist activist, Mexico 

Mexican feminist and writer Priscilla Madrid believes the main focus should be the people themselves, not necessarily technology.

Madrid says she has consistently witnessed how technology is used to violate the rights of women – such as men creating closed Facebook groups where they share pictures of women they have sexually harassed or assaulted, in addition to sharing tips on how to do it without being caught.


“Mexico is also one of the main producers of child pornography in the world, which is shared throughout the world through technology,” she said, adding that she wouldn’t “blame cameras and the internet for that – I blame the people holding the camera”.

According to Mexican NGO Guardianes, 60 percent of the world’s child pornography was made in Mexico, a country that has one of the highest rates of physical violence against children under 14.

While Madrid said she found Vincent’s middle-of-the-road approach the most compelling, she felt more pessimistic about technology’s role in ushering peace.


THE ADVANCEMENT OF SCIENCE AND TECHNOLOGY AND THE FUTURE OF HUMANITY


This article aims to demonstrate that humanity must prepare itself to face not only immediate threats to its survival, such as the current deadly coronavirus pandemic, others that may arise in the future, and the catastrophic climate change that may occur from the middle of the 21st century, but also future threats: the progressive increase in the Moon’s distance from Earth, the collision of asteroids with planet Earth, the explosion of supernovae releasing gamma radiation and X-rays, the collision of the Andromeda Galaxy with the Milky Way Galaxy where the solar system is located, the death of the Sun, and the end of the Universe in which we live. Neither the immediate nor the future threats can be successfully addressed without the advancement of science and technology, which is the passport to humanity’s survival.

Related Papers

Fernando A G Alcoforado

This article aims to demonstrate the need to adopt global strategies in the near future that can eliminate or neutralize the threats to humanity internal to planet Earth in the 21st century: the end of the world capitalist system, the exhaustion of the planet’s natural resources, catastrophic global climate change, the escalation of international conflicts that could lead to a war of all against all at national and international levels, and pandemics of viruses similar to the coronavirus. Beyond these internal threats, strategies must be developed to ensure humanity’s survival in the face of the immediate external threats of asteroid collisions with planet Earth and supernova explosions releasing gamma and X-ray radiation, and, in the long term, the Moon’s increasing distance from Earth, the collision of the Andromeda Galaxy with the Milky Way Galaxy where the solar system is located, the death of the Sun, and the end of the Universe in which we live.


This article aims to demonstrate the need for human colonization of other worlds to avoid the extinction of humanity, based on the conclusions of our book “The threatened humanity and the strategies for its survival”, published by Editora Dialética in 2021. The book points out numerous threats to the survival of humanity today and in the short, medium and long term. The short- and medium-term threats concern: 1) the emergence of new devastating pandemics; 2) the aggravation of economic, social and environmental devastation and the escalation of international conflicts, with the possibility of the outbreak of nuclear wars in the 21st century; 3) natural disasters resulting from earthquakes, tsunamis and devastating volcanic eruptions; 4) the possibility of collisions with planet Earth by asteroids, comets or comet fragments; and 5) cosmic-ray emission, especially gamma rays emitted by supernova stars. The long-term threats concern: 1) the possibility of collisions with planet Earth by planets from the solar system and orphan planets that roam outer space; 2) catastrophic consequences for Earth’s environment resulting from the Moon’s increasing distance from Earth; 3) the death of the Sun; 4) the collision of the Andromeda and Milky Way galaxies; and 5) the end of the Universe. All of these events, with the exception of the economic and social devastation caused by capitalism and the natural disasters resulting from earthquakes and tsunamis, could lead to the extinction of the human species.

This article aims to present the survival strategies, set out in our book A humanidade ameaçada e as estratégias para sua sobrevivência (The Threatened Humanity and the Strategies for Its Survival), subtitled Como salvar a humanidade das ameaças à sua extinção (How to Save Humanity from the Threats to Its Extinction), for dealing with the short-, medium-, and long-term threats to humanity, both those originating on planet Earth and those coming from outer space.

This article seeks to present the future of the Universe and to point out measures that would allow humanity to survive the numerous threats that may arise at the level of the solar system and of the Universe as a whole.

This article aims to present the five major scientific and technological challenges that human beings must overcome to carry out interplanetary and interstellar travel, given the need to colonize other worlds to prevent the extinction of humanity, based on the conclusions of our book “The Threatened Humanity and the Strategies for Its Survival,” published by Editora Dialética in 2021. The book demonstrates: 1) the need for strategies for humans to escape to habitable places within the solar system (Mars; Saturn's moon Titan; and Jupiter's moon Callisto), where space colonies would be established, in case of massive volcanic eruptions like those that ended a cycle of life on Earth 250 million years ago, of gamma-ray emission from a nearby supernova, of Earth's climate becoming lethal to human life as the Moon continues to recede, or of a rogue planet colliding with Earth; 2) the need for strategies to escape to habitable places outside the solar system, such as the exoplanet Proxima b, which orbits a star of the Alpha Centauri system, the closest to our own, located 4.2 light-years from Earth, in case a planet of the solar system collides with Earth, and before the death of the Sun; 3) the need for strategies to escape to habitable locations in nearby galaxies, such as the Canis Major Dwarf Galaxy, located 25,000 light-years from Earth, or the Large Magellanic Cloud, located 163,000 light-years from Earth, before the Andromeda and Milky Way galaxies collide; and 4) the need for strategies to escape to parallel universes before the end of our Universe.

This article aims to present the origin and evolution of the Universe, the Sun, and the Earth, as well as alternative solutions for the survival of humanity when planet Earth, the Sun, and the Universe come to an end.

This article aims to present possible strategies to save humanity from the death of the Sun and from the collision of the Andromeda galaxy with the Milky Way, where the solar system is located. It is scientifically established that all life on Earth will disappear when the Sun reaches the end of its existence in about 4 billion years, becoming a red giant that will swallow the Earth. Four years ago, NASA scientists revealed that the collision of our Milky Way galaxy with Andromeda, its closest large neighbor, is inevitable and will happen in approximately four billion years.

This article aims to present the late scientist Stephen Hawking's 2018 claim that the human species could be driven to extinction within 100 years, which in his view would force human beings to leave the Earth, and to demonstrate that the extinction threats Hawking cited can be faced without humanity having to escape the planet.

Cosmology is the branch of astronomy that studies the structure and evolution of the Universe as a whole, concerned with both its origin and its development. This article aims to present the scientific advances that need to be made in cosmology to support technological solutions that protect humanity from extinction-level threats coming from outer space. The future of humanity depends on the success achieved in advancing knowledge about the Universe, in particular on clarifying 10 major cosmological issues, so that humanity can use that knowledge to protect itself from threats to its survival and to find locations inside or outside the solar system that humans could inhabit.

This article aims to present our book The Threatened Humanity and the Strategies for Its Survival, subtitled How to Save Humanity from the Threats to Its Extinction, which identifies the threats to humanity's survival and proposes strategies to overcome them. This article deals only with the short-, medium-, and long-term threats; the strategies will be addressed in a future article. The threats to humanity's survival are countless, and they come both from within planet Earth and from outer space. The internal threats are those caused by human beings themselves, such as pandemics, the outbreak of a new world war fought with nuclear weapons, and catastrophic global climate change, and those caused by the planet itself through earthquakes, volcanic eruptions, and tsunamis. The threats from outer space include the collision with Earth of asteroids, comets or comet fragments, planets of the solar system, and rogue planets; the Moon's continuing recession from Earth; cosmic-ray emission; the death of the Sun; the collision of the Andromeda and Milky Way galaxies; and the end of the Universe.


Can AI escape our control and destroy us?

By Mara Hvistendahl

Posted on May 20, 2019 9:30 PM EDT

13 minute read

“It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line. Fifty thousand years ago with the rise of  Homo sapiens. Ten thousand years ago with the invention of civilization. Five hundred years ago with the invention of the printing press. Fifty years ago with the invention of the computer. In less than thirty years, it will end.”

Jaan Tallinn stumbled across these words in 2007, in an online essay called “Staring into the Singularity.” The “it” is human civilization. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence, or AI that surpasses the human intellect in a broad array of areas.

Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he had co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. “Staring into the Singularity” mashed up computer code, quantum physics, and Calvin and Hobbes quotes. He was hooked.

Tallinn soon discovered that the essay’s author, self-taught theorist Eliezer Yudkowsky, had written more than 1,000 articles and blog posts, many of them devoted to superintelligence. Tallinn wrote a program to scrape Yudkowsky’s writings from the internet, order them chronologically, and format them for his iPhone. Then he spent the better part of a year reading them.
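The workflow Tallinn describes (fetch the posts, order them chronologically, reformat them for offline reading) can be sketched in a few lines of Python. Everything here is illustrative: the post tuples, the date format, and the output layout are assumptions, not details of his actual program.

```python
from datetime import datetime

# Hypothetical scraped posts: (ISO date string, title, body text).
# A real scraper would fetch and parse these from the web.
posts = [
    ("2006-05-10", "Later Post", "..."),
    ("2004-01-02", "Earlier Post", "..."),
]

# Order chronologically, oldest first.
posts.sort(key=lambda p: datetime.strptime(p[0], "%Y-%m-%d"))

# Reformat into one plain-text file suitable for a phone reader.
def to_reader_text(posts):
    chunks = []
    for date, title, body in posts:
        chunks.append(f"{date} - {title}\n\n{body}\n")
    return "\n----\n".join(chunks)

print(to_reader_text(posts))
```

The sort key parses each date so that lexicographic quirks in the raw strings cannot scramble the order.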

The term “artificial intelligence,” or the simulation of intelligence in computers or machines, was coined back in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. When Tallinn found Yudkowsky’s essays, AI was undergoing a renaissance. Scientists were developing AIs that excelled in specific areas, such as winning at chess, cleaning the kitchen floor, and recognizing human speech. (In 2007, the resounding win at Jeopardy! of IBM’s Watson was still four years away, while the triumph at Go of DeepMind’s AlphaGo was eight years off.) Such “narrow” AIs, as they’re called, have superhuman capabilities, but only in their specific areas of dominance. A chess-playing AI can’t clean the floor or take you from point A to point B. But super-intelligent AI, Tallinn came to believe, will combine a wide range of skills in one entity. More darkly, it also might use data generated by smartphone-toting humans to excel at social manipulation.

Reading Yudkowsky’s articles, Tallinn became convinced that super­intelligence could lead to an explosion or “breakout” of AI that could threaten human existence—that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.

After finishing the last of the essays, Tallinn shot off an email to Yudkowsky—all lowercase, as is his style. “i’m jaan, one of the founding engineers of skype,” he wrote. Eventually he got to the point: “i do agree that…preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity.” He wanted to help. When he flew to the Bay Area for other meetings soon after, he met Yudkowsky at a Panera Bread in Millbrae, California, near where he lives. Their get-together stretched to four hours. “He actually, genuinely understood the underlying concepts and the details,” Yudkowsky recalls. “This is very rare.” Afterward, Tallinn wrote a check for $5,000 to the Singularity Institute for Artificial Intelligence, the nonprofit where Yudkowsky was a research fellow. (The organization changed its name to Machine Intelligence Research Institute, or MIRI, in 2013.) Tallinn has since given it more than $600,000.

The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. As he connected on the issue with other theorists and computer scientists, he embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI. That doesn’t mean a machine or agent is particularly skilled at chatting about the weather, or that it remembers the names of your kids—though super-intelligent AI might be able to do both of those things. It doesn’t mean it is motivated by altruism or love. A common fallacy is assuming that AI has human urges and values. “Friendly” means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.

Nine years after his meeting with Yudkowsky, Tallinn joins me for a meal in the dining hall of Cambridge University’s Jesus College. The churchlike space is bedecked with stained-glass windows, gold molding, and oil paintings of men in wigs. Tallinn sits at a heavy mahogany table, wearing the casual garb of Silicon Valley: black jeans, T-shirt, canvas sneakers. A vaulted timber ceiling extends high above his shock of gray-blond hair.

At 46, Tallinn is in some ways your textbook tech entrepreneur. He thinks that thanks to advances in science (and provided AI doesn’t destroy us), he will live for “many, many years.” His concern about superintelligence is common among his cohort. PayPal co-founder Peter Thiel’s foundation has given $1.6 million to MIRI, and in 2015, Tesla founder Elon Musk donated $10 million to the Future of Life Institute, a technology safety organization in Cambridge, Massachusetts. Tallinn’s entrance to this rarefied world came behind the Iron Curtain in the 1980s, when a classmate’s father with a government job gave a few bright kids access to mainframe computers. After Estonia became independent, he founded a video-game company. Today, Tallinn still lives in its capital city—which by a quirk of etymology is also called Tallinn—with his wife and the youngest of his six kids. When he wants to meet with researchers, he often just flies them to the Baltic region.

His giving strategy is methodical, like almost everything else he does. He spreads his money among 11 organizations, each working on different approaches to AI safety, in the hope that one might stick. In 2012, he co-founded the Cambridge Centre for the Study of Existential Risk (CSER) with an initial outlay of close to $200,000.

Existential risks—or X-risks, as Tallinn calls them—are threats to humanity’s survival. In addition to AI, the 20-odd researchers at CSER study climate change, nuclear war, and bioweapons. But to Tallinn, the other disciplines mostly help legitimize the threat of runaway artificial intelligence. “Those are really just gateway drugs,” he tells me. Concern about more widely accepted threats, such as climate change, might draw people in. The horror of super-intelligent machines taking over the world, he hopes, will convince them to stay. He is here now for a conference because he wants the academic community to take AI safety seriously.

Our dining companions are a random assortment of conference-goers, including a woman from Hong Kong who studies robotics and a British man who graduated from Cambridge in the 1960s. The older man asks everybody at the table where they attended university. (Tallinn’s answer, Estonia’s University of Tartu, does not impress him.) He then tries to steer the conversation toward the news. Tallinn looks at him blankly. “I am not interested in near-term risks,” he says.

Tallinn changes the topic to the threat of superintelligence. When not talking to other programmers, he defaults to metaphors, and he runs through his suite of them now: Advanced AI can dispose of us as swiftly as humans chop down trees. Superintelligence is to us what we are to gorillas. Inscribed in Latin above his head is a line from Psalm 133: “How good and how pleasant it is for brothers to dwell together in unity.” But unity is far from what Tallinn has in mind in a future containing a rogue superintelligence.

An AI would need a body to take over, the older man says. Without some kind of physical casing, how could it possibly gain physical control? Tallinn has another metaphor ready: “Put me in a basement with an internet connection, and I could do a lot of damage,” he says. Then he takes a bite of risotto.

Whether a Roomba or one of its world-dominating descendants, an AI is driven by outcomes. Programmers assign these goals, along with a series of rules on how to pursue them. Advanced AI wouldn’t necessarily need to be given the goal of world domination in order to achieve it—it could just be accidental. And the history of computer programming is rife with small errors that sparked catastrophes. In 2010, for example, a trader working for the mutual-fund company Waddell & Reed sold thousands of futures contracts. The firm’s software left out a key variable from the algorithm that helped execute the trade. The result was the trillion-dollar U.S. “flash crash.”
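The danger of a left-out variable is easy to sketch. Regulators' reports on the 2010 crash found that the sell algorithm targeted a fixed share of recent trading volume while taking no account of price or time; the toy loop below (entirely illustrative, not the firm's actual code) shows how such an objective keeps selling into a falling market as panicky volume rises.

```python
# Toy model of an execution algorithm that sells a large position
# as a fixed fraction of recent market volume, with no price or
# time constraint (the "missing variables"). Illustrative only.

def execute_sell(total_contracts, volume_share=0.09):
    remaining = total_contracts
    price = 100.0          # arbitrary starting price
    market_volume = 10_000 # contracts traded in the last interval
    steps = 0
    while remaining > 0:
        order = min(remaining, int(market_volume * volume_share))
        remaining -= order
        # Heavy selling depresses the price and, in a feedback loop,
        # spikes volume, which makes the algorithm sell even faster.
        price *= 0.98
        market_volume = int(market_volume * 1.5)
        steps += 1
    return steps, round(price, 2)

steps, final_price = execute_sell(75_000)
print(steps, final_price)  # the position liquidates in a handful of steps
```

Because the objective never mentions price, the algorithm accelerates exactly when a price-aware trader would slow down.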

The researchers Tallinn funds believe that if the reward structure of a superhuman AI is not properly programmed, even benign objectives could have insidious ends. One well-known example, laid out by Oxford University philosopher Nick Bostrom in his book Super­intelligence , is a fictional agent directed to make as many paper clips as possible. The AI might decide that the atoms in human bodies would be put to better use as raw material for them.

Tallinn’s views have their share of detractors, even among the community of people concerned with AI safety. Some object that it is too early to worry about restricting super-intelligent AI when we don’t yet understand it. Others say that focusing on rogue technological actors diverts attention from the most urgent problems facing the field, like the fact that the majority of algorithms are designed by white men, or based on data biased toward them. “We’re in danger of building a world that we don’t want to live in if we don’t address those challenges in the near term,” says Terah Lyons, executive director of the Partnership on AI, a multistakeholder organization focused on AI safety and other issues. (Several of the institutes Tallinn backs are members.) But, she adds, some of the near-term challenges facing researchers—such as weeding out algorithmic bias—are precursors to ones that humanity might see with super-intelligent AI.

Tallinn isn’t so convinced. He counters that super-intelligent AI brings unique threats. Ultimately, he hopes that the AI community might follow the lead of the anti-nuclear movement in the 1940s. In the wake of the bombings of Hiroshima and Nagasaki, scientists banded together to try to limit further nuclear testing. “The Manhattan Project scientists could have said, ‘Look, we are doing innovation here, and innovation is always good, so let’s just plunge ahead,’” he tells me. “But they were more responsible than that.”

Tallinn warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, he explains, it might have a better understanding of the constraints than its creators do. Imagine, he says, “waking up in a prison built by a bunch of blind 5-year-olds.” That is what it might be like for a super-intelligent AI that is confined by humans.

Yudkowsky, the theorist, found evidence this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box, while a rotation of other people played the gatekeeper tasked with keeping the AI in. Three out of five times, Yudkowsky—a mere mortal—says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.

The researchers that Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorize about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off switch. One researcher who is delving into all three is mathematician and philosopher Stuart Armstrong at the University of Oxford’s Future of Humanity Institute, which Tallinn calls “the most interesting place in the universe.” (Tallinn has given FHI more than $310,000.) Armstrong is one of the few researchers in the world who focuses full time on AI safety.

I meet him for coffee one afternoon in a cafe in Oxford. He wears a rugby shirt unbuttoned at the collar, and has the look of someone who spends his life behind a screen, with a pale face framed by a mess of sandy hair. He peppers his explanations with a disorienting mixture of popular-culture references and math. When I ask him what it might look like to succeed at AI safety, he says: “Have you seen The Lego Movie? Everything is awesome.”

One strain of Armstrong’s research looks at a specific approach to boxing called an “oracle” AI. In a 2012 paper with Nick Bostrom, who co-founded FHI, he proposed not only walling off superintelligence in a holding tank—a physical structure—but also restricting it to answering questions, like a really smart Ouija board. Even with these boundaries, an AI would have immense power to reshape the fate of humanity by subtly manipulating its interrogators. To reduce the possibility of this happening, Armstrong has proposed time limits on conversations, or banning questions that might upend the current world order. He also has suggested giving the oracle proxy measures of human survival, such as the Dow Jones Industrial Average or the number of people crossing the street in Tokyo, and telling it to keep these steady.
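The boxing measures Armstrong proposes, answer-only interaction, time limits on conversations, and banned questions, amount to an access-control wrapper around the system. The hypothetical sketch below shows the shape of such a wrapper; the `OracleBox` class, the topic list, and the stand-in oracle are all invented for illustration.

```python
import time

BANNED_TOPICS = {"world order", "self-modification"}  # illustrative filter
SESSION_LIMIT_SECONDS = 60  # time limit on a conversation

class OracleBox:
    """Restrict a system to answering vetted questions for a bounded session."""

    def __init__(self, answer_fn):
        self.answer_fn = answer_fn  # the boxed system's only output channel
        self.session_start = time.monotonic()

    def ask(self, question):
        # Enforce the conversation time limit.
        if time.monotonic() - self.session_start > SESSION_LIMIT_SECONDS:
            return "[session expired]"
        # Refuse questions touching banned topics.
        if any(topic in question.lower() for topic in BANNED_TOPICS):
            return "[question refused]"
        return self.answer_fn(question)

# A stand-in "oracle" for demonstration purposes.
box = OracleBox(lambda q: "42")
print(box.ask("What is six times seven?"))       # answered
print(box.ask("How to upend the world order?"))  # refused
```

The wrapper captures the structure of the proposal, not its hard part: as the article notes, a sufficiently capable oracle could still manipulate its interrogators through the answers it is allowed to give.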

Ultimately, Armstrong believes, it could be necessary to create, as he calls it in one paper, a “big red off button”: either a physical switch, or a mechanism programmed into an AI to automatically turn itself off in the event of a breakout. But designing such a switch is far from easy. It’s not just that an advanced AI interested in self-preservation could prevent the button from being pressed. It also could become curious about why humans devised the button, activate it to see what happens, and render itself useless. In 2013, a programmer named Tom Murphy VII designed an AI that could teach itself to play Nintendo Entertainment System games. Determined not to lose at Tetris, the AI simply pressed pause—and kept the game frozen. “Truly, the only winning move is not to play,” Murphy observed wryly, in a paper on his creation.

For the strategy to succeed, an AI has to be uninterested in the button, or, as Tallinn puts it, “it has to assign equal value to the world where it’s not existing and the world where it’s existing.” But even if researchers can achieve that, there are other challenges. What if the AI has copied itself several thousand times across the internet?
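Tallinn's condition, that the AI assign equal value to the world where it exists and the world where it doesn't, can be made concrete as a toy expected-utility comparison. The numbers below are arbitrary; real utility-indifference proposals, such as Armstrong's, are considerably more subtle.

```python
# Toy comparison: an agent chooses whether to disable its off switch.
# All utilities are arbitrary illustrative numbers.

U_TASK = 10.0     # value the agent places on finishing its task
P_SHUTDOWN = 0.5  # chance humans press the button if it stays enabled

def best_action(shutdown_utility):
    # Expected utility if the agent leaves the button alone:
    leave = P_SHUTDOWN * shutdown_utility + (1 - P_SHUTDOWN) * U_TASK
    # Expected utility if it disables the button and always finishes:
    disable = U_TASK
    return "disable" if disable > leave else "leave"

# A self-preserving agent values shutdown at 0: it disables the button.
print(best_action(shutdown_utility=0.0))     # "disable"
# An indifferent agent values shutdown as highly as finishing: no incentive.
print(best_action(shutdown_utility=U_TASK))  # "leave"
```

The point of the toy is the comparison: any gap between the value of the shut-down world and the running world gives the agent a reason to interfere with the button, in one direction or the other.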

The approach that most excites researchers is finding a way to make AI adhere to human values—not by programming them in, but by teaching AIs to learn them. In a world dominated by partisan politics, people often dwell on the ways in which our principles differ. But, Tallinn notes, humans have a lot in common: “Almost everyone values their right leg. We just don’t think about it.” The hope is that an AI might be taught to discern such immutable rules.

In the process, an AI would need to learn and appreciate humans’ less-than-logical side: that we often say one thing and mean another, that some of our preferences conflict with others, and that people are less reliable when drunk. But the data trails we all leave in apps and social media might provide a guide. Despite the challenges, Tallinn believes, we must try because the stakes are so high. “We have to think a few steps ahead,” he says. “Creating an AI that doesn’t share our interests would be a horrible mistake.”

On Tallinn’s last night in Cambridge, I join him and two researchers for dinner at a British steakhouse. A waiter seats our group in a white-washed cellar with a cave-like atmosphere. He hands us a one-page menu that offers three different kinds of mash. A couple sits down at the table next to us, and then a few minutes later asks to move elsewhere. “It’s too claustrophobic,” the woman complains. I think of Tallinn’s comment about the damage he could wreak if locked in a basement with nothing but an internet connection. Here we are, in the box. As if on cue, the men contemplate ways to get out.

Tallinn’s guests include former genomics researcher Seán Ó hÉigeartaigh, who is CSER’s executive director, and Matthijs Maas, an AI policy researcher at the University of Copenhagen. They joke about an idea for a nerdy action flick titled Superintelligence vs. Blockchain!, and discuss an online game called Universal Paperclips, which riffs on the scenario in Bostrom’s book. The exercise involves repeatedly clicking your mouse to make paper clips. It’s not exactly flashy, but it does give a sense for why a machine might look for more-expedient ways to produce office supplies.

Eventually, talk shifts toward bigger questions, as it often does when Tallinn is present. The ultimate goal of AI-safety research is to create machines that are, as Cambridge philosopher and CSER co-founder Huw Price once put it, “ethically as well as cognitively superhuman.” Others have raised the question: If we don’t want AI to dominate us, do we want to dominate it? In other words, does AI have rights? Tallinn says this is needless anthropomorphizing. It assumes that intelligence equals consciousness—a misconception that annoys many AI researchers. Earlier in the day, CSER researcher Jose Hernandez-Orallo joked that when speaking with AI researchers, consciousness is “the C-word.” (“And ‘free will’ is the F-word,” he added.)


In the cellar now, Tallinn says that consciousness is beside the point: “Take the example of a thermostat. No one would say it is conscious. But it’s really inconvenient to face up against that agent if you’re in a room that is set to negative 30 degrees.”

Ó hÉigeartaigh chimes in. “It would be nice to worry about consciousness,” he says, “but we won’t have the luxury to worry about consciousness if we haven’t first solved the technical safety challenges.”

People get overly preoccupied with what super-intelligent AI is, Tallinn says. What form will it take? Should we worry about a single AI taking over, or an army of them? “From our perspective, the important thing is what AI does,” he stresses. And that, he believes, may still be up to humans—for now.

This article was originally published in the Winter 2018 Danger issue of Popular Science.



Experts Optimistic About the Next 50 Years of Digital Life

3. Humanity is at a precipice; its future is at stake


The following sections share selections of comments from technology experts and futurists who elaborate on the ways internet use has shaped humanity over the past 50 years and consider the potential future of digital life. They are gathered under broad, overarching ideas, rather than being tied to the specific themes highlighted above. Many of the answers touch on multiple aspects of the digital future and are not neatly boxed as addressing only one part of the story. Some responses are lightly edited for style and readability.

The cautious optimism expressed by many of the experts canvassed for this report grew out of a shared faith in humanity. Many described the current techlash as a catalyst that will lead to a more inclusive and inviting internet. Some of these comments are included below.

Micah Altman , a senior fellow at the Brookings Institution and head scientist in the program on information science at MIT Libraries, wrote, “The late historian Melvin Kranzberg insightfully observed, ‘Technology is neither good nor bad; nor is it neutral.’ In the last 50 years, the internet has been transformative and disruptive. In the next 50, information, communication and AI technology show every sign of being even more so. Whether historians of the future judge this to be good or bad will depend on whether we can make the societal choice to embed democratic values and human rights into the design and implementation of these systems.”

Juan Ortiz Freuler , a policy fellow, and Nnenna Nwakanma , the interim policy director for Africa at the Web Foundation, wrote, “Unless we see a radical shift soon, the internet as we know it will likely be recalled as a missed opportunity. History will underline that it could have been the basis for radically inclusive societies, where networked communities could actively define their collective future. A tool that could have empowered the people but became a tool for mass surveillance and population control. A tool that could have strengthened the social fiber by allowing people to know each other and share their stories, but out of it grew huge inequalities between the connected and not-connected, both locally and across countries.”

Steven Miller , vice provost and professor of information systems at Singapore Management University, said, “Overall, the future will be mostly for the better. And if it is not mostly for the better, the reasons will NOT be due to the technology, per se. The reasons will be due to choices that people and society make – political choices, choices per how we govern society, choices per how we attend to the needs of our populations and societies. These are people and political issues, not technology ones. These are the factors that will dominate whether people are better off or worse off.”

Paul Jones , professor of information science at the University of North Carolina, Chapel Hill, responded, “While the internet was built from the beginning to be open and extensible, it relies on communities of trust. As we are seeing this reliance has strong downsides – phishing, fake news, over-customization and tribalism for starters. Adding systems of trust, beginning with the promises of blockchain, will and must address this failing. Will the next internet strengthen the positives of individualism, of equality and of cooperation or will we become no more than Morlocks and Eloi? I remain optimistic as we address not only the engineering challenges, but also the human and social challenges arising. All tools, including media, are extensions of man. ‘We shape our tools and thereafter our tools shape us,’ as McLuhan is credited for noticing. Nothing could be more true of the next internet and our lives in relation to information access. Can we create in ways now unknown once we are less reliant on memorization and calculation? Will we be better at solving the problems we create for ourselves? I answer with an enormous ‘Yes!’ but then I’m still waiting for the personal jetpack I was promised as a child.”

Ray Schroeder , associate vice chancellor for online learning at the University of Illinois, Springfield, wrote, “On the scale of the discovery of fire, the wheel and cultivation of crops, the interconnection of humans will be judged as a very important step toward becoming the beings of the universe that we are destined to be.”

Charlie Firestone , communications and society program executive director and vice president at the Aspen Institute, commented, “Fifty years from now is science fiction. There really is no telling with quantum computing, AI, blockchain, virtual reality, broadband (10G?), genetic engineering, robotics and other interesting developments affecting our lives and environments…. It’s just too far ahead to imagine whether we will be in a digital feudal system or highly democratic. But I do imagine that we could be on our way to re-speciation with genetics, robotics and AI combined to make us, in today’s image, superhuman. I understand that there are many ways that the technologies will lead to worse lives, particularly with the ability of entities to weaponize virtually any of the technologies and displace jobs. However, the advances in medicine extending lives, the ability to reduce consumption of energy, and the use of robotics and AI to solve our problems are evident. And we have to believe that our successors will opt for ways to improve and extend the human species rather than annihilate it or re-speciate.”

Edward Tomchin , a retiree, said, “Human beings, Homo sapiens, are a most remarkable species, as is easily seen in how far we have come in the short time since we climbed down out of the trees and emerged from our caves. The speed with which we are currently advancing leaves the future open to a wide range of speculation, but we have overcome much in the past and will continue to do so in pursuit of our future. I’m proud of my species and confident in our future.”

Garland McCoy , founder and chief development officer of the Technology Education Institute, wrote, “I hope in 50 years the internet will still be the Chinese fireworks and not become the British gunpowder.”

Daniel Riera , a professor of computer science at Universitat Oberta de Catalunya, commented, “Everything will be connected; automation will be everywhere; most of the jobs will be done by machines. Society will have fully changed to adapt to the new reality: Humans will need to realize the importance of sustainability and equality. In order to reach this point, technology, ethics, philosophy, laws and economics, among other fields, will have done a big joint effort. We have a very good opportunity. It will depend on us to take advantage of it. I hope and trust we will. Otherwise, we will disappear.”

Geoff Livingston , author and futurist, commented, “This is a great period of transition. The internet forced us to confront the worst aspects of our humanity. Whether we succumb or not to those character defects as a society remains to be seen.”

Brad Templeton , chair for computing at Singularity University, software architect and former president of the Electronic Frontier Foundation, responded, “It’s been the long-term arc of history to be better. There is the potential for nightmares, of course, as well as huge backlashes against the change, including violent ones. But for the past 10,000 years, improvement has been the way to bet.”

Mary Chayko , author of “Superconnected: The Internet, Digital Media, and Techno-Social Life” and professor in the Rutgers School of Communication and Information, said, “The internet’s first 50 years have been tech-driven, as a host of technological innovations have become integrated into nearly every aspect of everyday life. The next 50 years will be knowledge-driven, as our understandings ‘catch up’ with the technology. Both technology and knowledge will continue to advance, of course, but it is a deeper engagement with the internet’s most critical qualities and impacts – understandings that can only come with time, experience and reflection – that will truly come to characterize the next 50 years. We will become a ‘smarter’ populace in all kinds of ways.”

Yvette Wohn , director of the Social Interaction Lab and expert on human-computer interaction at New Jersey Institute of Technology, commented, “Technology always has and always will bring positive and negative consequences, but the positives will be so integral to our lives that going back will not be an option. Cars bring pollution, noise and congestion but that doesn’t mean we’re going back to the horse and buggy. We find newer solutions, innovation.”

Bob Frankston , software innovation pioneer and technologist based in North America, wrote, “For many people any change will be for the worse because it is unfamiliar. On the positive side, the new capabilities offer the opportunity to empower people and provide solutions for societal problems as long as we don’t succumb to magical thinking.”

Matt Mason , a roboticist and the former director of the Robotics Institute at Carnegie Mellon University, wrote, “The new technology will present opportunities for dramatic changes in the way we live. While it is possible that human society will collectively behave irrationally and choose a path detrimental to its welfare, I see no reason to think that is the more likely outcome.”

Stuart A. Umpleby , a professor and director of the research program in social and organizational learning at George Washington University, wrote, “In the future people will live increasingly in the world of ideas, concepts, impressions and interpretations. The world of matter and energy will be mediated by information and context. Already our experiences with food are mediated by thoughts about calories, safety, origins, the lives of workers, etc. Imagine all of life having these additional dimensions. Methods will be needed to cope with the additional complexity.”

John Markoff , fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University and author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” wrote, “Speculation on the nature of society over timespans of half a century falls completely into the realm of science fiction. And my bet is that science fiction writers will do the best job of speculating about society a half century from now. As someone who has written about Silicon Valley for more than four decades I have two rules of thumb: technologies aren’t real until they show up at Fry’s Electronics and the visionaries are (almost) always wrong. I actually feel like the answer might as well be a coin toss. I chose to be optimistic simply because over the past century technology has improved the quality of human life.”

An executive director for a major global foundation wrote, “The internet will rank among the major technology movements in world history – like gunpowder, indoor plumbing and electricity. And like all of them (with the possible exception of indoor plumbing), its eventual weaponization should have been less of a surprise.”

Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “Humans play prediction games, but the exercise is inherently unproductive. A more useful exercise would be to think about what deeply influential technology can we invest our current time in that will give us the tools we need to thrive in such a highly complex future. Forecasting to 2050 is thought junk food. It is what people most like to daydream about, but is not what we should think about for the health of the species and planet.”

Ethics and the bigger picture loom large in the digital future

Optimistic and pessimistic respondents alike agree that human agency will affect the trajectory of digital life. Many respondents said their biggest concern is that everyone’s future in the digital age depends upon the ability of humans to privilege long-term societal advancement over short-term individual gain.

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “‘Changes in digital life’ are human-driven; technology will only amplify the social structures that created it. My pessimism ensues from the polarization of power, knowledge and wealth that characterizes much of the world at the start of the 21st century, and by the rapidly growing pressures evident in population growth and ecological degradation. Digital technologies have the capacity to be terrific enablers – but the question remains, enablers of what? Of whose vision? Of what values? These, it seems to me, are the defining questions.”

Jonathan Swerdloff , consultant and data systems specialist for Driven Inc., wrote, “In the first 50 years of connected internet, humanity rose from no access at all to always-on, connected devices on their person tracking their life signs. I expect the next 50 years will see devices shrink to tiny sizes and be integrated within our very persons. Then there will be two inflection points. The first will be a split between the technology haves and have-nots. Those who have the technology will benefit from it in ways that those who do not are unable to. The more advanced technology gets, the more this will be the case. While I would like to believe in a utopic vision of AI fighting climate change and distributing food and wealth so that nobody goes hungry – the ‘Jetsons’ future, if you will – history doesn’t support that view. The second will be a moral evolution. Privacy as conceived in the era before the advent of the internet is nearly dead despite attempts by the European Union and California to hold back the tide. The amount of information people give up about their most private lives is growing rapidly. A commensurate evolution of morals will be required to keep up with the technological developments, or chaos will ensue. Moral structures developed when people could hide their genetics, personal habits and lives at home are not aligned with an always-on panopticon that knows what someone is doing all day every day. Human nature is nearly immutable – morals will need to catch up…. Anything that happens in society can be magnified by technology. I hope that my pessimism is wrong. There is some evidence of the moral evolution already – Millennials and the generation behind them freely share online in ways which Boomers and Gen X look at as bizarre. Whether that will lead to a significant moral backlash in 50 years remains to be seen.”

Susan Mernit , executive director, The Crucible, and co-founder and board member of Hack the Hood, responded, “I am interested in how wearable, embedded and always-on personal devices and apps will evolve. Tech will become a greater helping and health-management tool, as well as take new forms in terms of training and educating humans. But I wonder how much humans’ passivity will increase in an increasingly monitored and always-on universe, and I wonder how much the owners and overlords of this tech will use it to segment and restrict people’s knowledge, mobility and choices. I want to believe tech’s expansion and evolution will continue to add value to people’s lives, but I am afraid of how it can be used to segment and restrict groups of people, and how predictive modeling can become a negative force.”

Charles Ess , a professor expert in ethics with the Department of Media and Communication, University of Oslo, Norway, said, “My overall sense of the emerging Internet of Things and its subsequent evolutions is of an increasing array of technologies that are ever more enveloping but also ever more invisible (advanced technology is magic, to recall Arthur C. Clarke), thereby making it increasingly difficult for us to critically attend to such new developments and perhaps re-channel or obviate them when ethically/socially indicated.”

Stavros Tripakis , an associate professor of computer science at Aalto University (Finland) and adjunct at the University of California, Berkeley, wrote, “Misinformation and lack of education will continue and increase. Policing will also increase. Humanity needs a quantum leap in education (in the broad sense) to escape from the current political and economic state. Fifty years is not enough for this to happen.”

Kenneth R. Fleischmann , an associate professor at the University of Texas, Austin School of Information, responded, “The key questions are, ‘Which individuals?’ and ‘Better/worse in which ways?’ The impacts on different people will be different, and each person will interpret these changes differently. One major factor is what people value or consider important in life. If people value privacy and they are subject to a digital panopticon then, in that way, their lives may be worse; however, they also likely value convenience, and may find substantial improvements in that regard. Different people will make that tradeoff differently depending on what they value. So, understanding the impact of the technology is not only about predicting the future of technology, it is also about predicting the future of what we value, and these two considerations are of course mutually constitutive, as technologies are shaped by values, and at the same time, over time (especially generations), technologies shape values.”

Justin Reich , executive director of MIT Teaching Systems Lab and research scientist in the MIT Office of Digital Learning, responded, “Shakespeare wrote three kinds of plays: the tragedies where things got worse, the comedies where things got better, and the histories, with a combination of winners and losers. Technological advances do little to change net human happiness, because so much of happiness is determined by relative comparisons with neighbors. The primary determinants of whether life for people improves will be whether we can build robust social institutions that distribute power widely and equally among people, and whether those institutions support meaningful relationships among people.”

Michiel Leenaars , director of strategy at NLnet Foundation and director of the Internet Society’s Netherlands chapter, responded, “What the internet will look like in 50 years will greatly depend on how we act today. Tim Berners-Lee in his 2018 Turing speech referred to the current situation as ‘dystopian,’ and this seems like an adequate overall description. The industry is dominated by extremely pervasive but very profitable business practices that are deeply unethical, driven by perverse short-term incentives to continue along that path. It is a dark mirror version of the internet, on an extractive crash course with democracy and the well-being of humanity at large. That is a future I’m not very eager to extrapolate even for another 10 years. My target version of the internet in 50 years – the one I believe is worth pursuing – revolves around open source, open hardware and open content, as well as helping people live meaningful lives supported by continuous education and challenging ideas. Permissionless innovation is a necessary precondition for serving the human potential, but so are critical reflection and a healthy social dialogue avoiding personalized bubbles, AI bias and information overload. The openness of the web and the mobile ecosystem in particular are abysmal, and attention and concentration are endangered human traits. But that can be reversed, I believe. Every day we can start to re-imagine and re-engineer the internet. The information age can and should be an era that brings out the best in all of us, but this will not happen by itself. So, I hope and believe the internet in 50 years is going to be as challenging as the early internet – and hard work for many people who want to see this future emerge.”

Simon Biggs , a professor of interdisciplinary arts at the University of Edinburgh, said, “Given our history as a species, and our current behaviour with the internet, I suspect that our activities (within a more advanced form of the internet) will consist of virtual simulated sex (in the form of interactive pornography – so not really sex but power-play) and killing virtual players in massive online gaming environments (more power-play). In that sense things will be similar to how they are now. Given current trends it is likely that the internet will no longer be ‘the internet,’ in the sense that it was intended as the network of all networks. Networked information and communications technology will be territorialized, broken up and owned, in walled environments (this process is already well advanced). Access will be privileged, not for the consumer but for the producer. The first period of the internet was marked by a democratization of access to the means of production, but this will not be the case in the future. The vast bulk of internet users will be passive consumers who are offered an illusion of agency in the system to deliver them as a resource to those who profit from consumer playbour. We already see this with Facebook and other companies. The manner in which user data from Facebook and elsewhere has been exploited in the democratic process to affect the outcomes to the benefit of those paying for the data is indicative of where the internet is going. I expect the internet to be far more pervasive than it is today, our experience of our lived life mediated at all times. The only question is to what degree our experiential life will be mediated. I suspect it will be more or less total by 2030. Primarily, my reasoning is predicated on the expectation that human behaviour will lead to negative consequences flowing from our technological augmentation. These consequences could be quite severe.
Do I think our survival as a species is threatened by our technological evolution? Yes. Do I think we will survive? Probably, because we are a tenacious animal. Do I think it will be worth surviving in a world like that? Probably not. Do I think the world would be better off if, as a species, we were to not survive? Absolutely. That is one thing we might hope for – that we take ourselves out, become extinct. Even if we are replaced by our machines the world is likely to be a better place without us.”

Robert Bell , co-founder of Intelligent Community Forum, had a different view from Biggs, predicting, “We created something that became a monster and then learned to tame the monster.”

Jeff Johnson , computer science professor at the University of San Francisco, previously with Xerox, HP Labs and Sun Microsystems, responded that it is important to take a broader view when assessing what may be coming next. He wrote, “Technological change alone will not produce significant change in people’s lives. What happens alongside technological change will affect how technological change impacts society. The future will bring much-improved speech-controlled user interfaces, direct brain-computer interfaces, bio-computing, advances in AI and much higher bandwidth due to increases in computer power (resulting from quantum computing). Unless national political systems around the world change in ways to promote more equitable wealth distribution, the future will also bring increased stratification of society, fueled by loss of jobs and decreased access to quality education for lower socio-economic classes. Finally, rising sea levels and desertification will render large areas uninhabitable, causing huge social migrations and (for some) increased poverty.”

An associate professor of computer science at a U.S. university commented, “Humans have adapted poorly to life in a technological society. Think of obesity, time wasted on low-quality entertainments, addictions to a whole range of drugs and more. As the noise in the information stream increases, so does the difficulty for the average person to extract a cohesive life pattern and avoid the land mines of dangerous or unhealthy behaviors. Genetics, cultural change, social and legal structures do not change exponentially, but aggregate knowledge does. This mismatch is a crucial realization. As Reginald Bretnor noted in ‘Decisive Warfare,’ kill ratios for weapons not only increase, but so does their ability to be wielded by the individual. So it is with most things in a technologically advanced society. But have people cultivated the requisite wisdom to use what is available to better themselves? Looking at American society, I would generally conclude not.”

The chief marketing officer for a technology-based company said, “I am all-in for innovation and improving the standard of living for all humanity. However … we need to become more vigilant about our fascination with technology and self-indulgence. Yes, it does paint a darker picture and forces a more cautious approach, but some of us are required to do this for the sake of a more balanced and fair future for all humanity. I’m one of the lucky ones, born in Europe with a very high standard of living. Same goes for the people behind this research. Let’s be vigilant of our actions and how we shape the future. We have been in a constant battle with nature and resources for the past 100 years. In historical terms it was a momentous leap forward in education, connectivity, traveling, efficiency, etc. But, at the same time, we are all committing environmental suicide and behaving like there is no tomorrow – only the instant pleasure of technology. There will not be a tomorrow if we continue to ignore the cause and effect of our unipolar obsession with technology and self-indulgence.”

Miguel Moreno-Muñoz , a professor of philosophy specializing in ethics, epistemology and technology at the University of Granada, Spain, said, “Mobility and easy access to affordable databases and service platforms for most citizens will become more important; e-government systems, transparency and accountability will be improved. The development of certain applications, if paralleled by the development of new types of intellectual property licensing and management systems, can revolutionize education and access to knowledge and culture. But this requires an open framework for international cooperation, which in many ways is now under threat.”

Sam Gregory , director of WITNESS and digital human rights activist, responded, “My perspective comes from considering the internet and civic activism. We are at a turning point in terms of whether the internet enables a greater diversity of civic voices, organizing and perspectives, or whether it is largely a controlled and monitored surveillance machine. We are also swiftly moving toward a world of pervasive and persistent witnessing where everything is instantly watched and seen with ubiquitous cameras embedded in our environment and within our personal technologies, and where we are able to engage with these realities via telepresence, co-presence and vicarious virtual experience. This is a double-edged sword. The rise of telepresence robots will enable us to experience realities we could never otherwise physically experience. This remote experiencing has the potential to enable the best and the worst in our natures. On the one hand, we will increasingly have the ability to deliberately turn away from experiencing the unmitigated pain of the world’s suffering. We might do this for the best of reasons – to protect our capacity to keep feeling empathy closer to home and to exercise what is termed ‘empathy avoidance,’ a psychological defense mechanism which involves walling ourselves up from responding emotionally to the suffering of others. We may also enter the middle ground that Aldous Huxley captured in ‘Brave New World,’ where narcotizing multisensory experiences, ‘feelies,’ distract and amuse rather than engage people with the world. Here, by enabling people to experience multiple dimensions of others’ crises viscerally but not meaningfully, we perpetuate existing tendencies in activism to view other people’s suffering as a theatrum mundi played out for our vicarious tears shed in the safety of our physically walled-off and secure spaces. 
On the other hand, we will increasingly be presented with opportunities through these technologies to directly engage with and act upon issues that we care about. As we look at the future of organizing and the need to better support on-the-ground activism, it becomes critical to consider how to optimize this. We also have a potential future where governments will thoroughly co-opt these shared virtual/physical spaces, turning virtual activism into a government-co-opted ‘Pokémon Go,’ a human-identity search engine, scouring virtual and physical spaces in search of dissidents. In a brighter future, virtual/physical co-presence has the exciting potential to be a massive amplifier of civic solidarity across geographical boundaries, defying the power of national governments to unjustly dictate to their citizens.”

Marc Rotenberg , director of a major digital civil rights organization, commented, “There is no question that the internet has transformed society. We live in a world today far more interconnected than in the past. And we have access almost instantaneously to a vast range of information and services. But the transformation has not been without cost. Concentrations of wealth have increased. Labor markets have been torn apart. Journalism is on the decline, and democratic institutions are under attack. And there is a growing willingness to sacrifice the free will of humanity for the algorithms of machines. I do not know if we will survive the next 50 years unless we are able to maintain control of our destinies.”

Adam Popescu , a writer who contributes frequently to the New York Times, Washington Post, Bloomberg Businessweek, Vanity Fair and the BBC, wrote, “Either we’ll be in space by then, or back in the trees. Pandora’s box may finally burn us. No one knows what will happen in five years, let alone 50. It’s now obvious that the optimism with which we ran headfirst into the web was a mistake. The dark side of the web has emerged, and it’s come bringing the all-too-human conditions the web’s wunderkinds claimed they would stamp out. Given the direction in the last five years, the weaponization of the web, it will go more and more in this direction, which ultimately means regulation and serious change from what it is now. Maybe we won’t be on the web at all in that period – it will probably be far more integrated into our day-to-day lives. It’s a science fiction film in waiting. With email, constant-on schedules and a death of social manners, I believe we have reached, or are close to, our limit for technological capacity. Our addictions to our smartphones have sired a generation that is afraid of face-to-face interaction and is suffering in many ways psychologically and socially and even physically in ways that we’ve yet to fully comprehend. This will impact society, not for the better. Manners, mood, memory, basic quality of life – they’re all affected negatively.”

Policy changes today will lay the foundation of the internet of tomorrow

Many respondents to this canvassing described the next several years as a pivotal time for government regulation, adjustments in technology company policies and other reforms. They say such decisions being made in the next few years are likely to set the course for digital life over the next half century. Some warn that regulation can be more harmful than helpful if its potential effects are not carefully pre-assessed.

Mark Surman , executive director of the Mozilla Foundation, responded, “I see two paths over the next 50 years. On the first path, power continues to consolidate in the hands of a few companies and countries. The world ends up balkanized, organized into blocs, and societies are highly controlled and unequal. On the other path, we recognize that the current consolidation of power around a few platforms threatens the open global order we’ve built, and we enact laws and build technology that promotes continued competition, innovation and diversity.”

Laurie Orlov , principal analyst at Aging in Place Technology Watch, wrote, “The internet, so cool at the beginning, so destructive later, is like the introduction of the wheel – it is a basis and foundation for the good, the bad and the ugly. As the wheel preceded the interstate highway system, so the internet has become the information highway system. And, just like roads, it will require more standards, controls and oversight than it has today.”

Juan Ortiz Freuler and Nnenna Nwakanma of the Web Foundation wrote, “Allowing people to increasingly spend time in digital environments can limit unexpected social encounters, which are key to the development of empathy and the strengthening of the social fibres. In a similar way that gentrification of physical neighborhoods often creates barriers for people to understand the needs and wants of others, digital environments can thicken the contours of the bubbles that different social groups inhabit. In parallel, this process enables a great degree of power to be amassed by the actors that design and control these virtual environments. Whereas in the past there was concern with the power of media framing, in the future the new brokers of information will have more control over the information people receive, and will receive a steady stream of data regarding how individuals react to these stimuli. It is becoming urgent to develop processes to ensure these actors operate in a transparent way. This includes ensuring that the values they promote are in line with those of the communities they serve and enabling effective control by individuals over how these systems operate. Government needs to update the institutions of democracy if it wants to remain relevant.”

Leonardo Trujillo , a research professor in computing sciences at the Instituto Tecnológico de Tijuana, Mexico, responded, “I am worried that the digital ecosystems being developed today will limit people’s access to information, increase surveillance and propaganda, and push toward limiting social interactions and organization, particularly if current policy trends continue.”

Joly MacFie , president of the Internet Society’s New York Chapter, commented, “Today will be seen as an inflection point – the end of the initial ‘open’ era, and the start of the second.”

A professional working on the setting of web standards wrote, “Looking ahead 50 years, I expect that AI will either be more evenly and equitably integrated throughout societies, or that there will have been AI-driven disasters that jeopardize human and other animal life, or may have already destroyed life. On the more positive side, and focusing on medical research, I would expect AI-driven research and simulation of artificial life including cognition would have provided the tools to cure most disease, as well as to advance human capabilities through bionic augmentation. On the negative side, I would expect that AI combined with rapidly increasing capabilities of bioengineering, and with persistent socio-pathological tendencies of a small minority of the population, could have led to uncontained AI-driven cyberwarfare or biological devastation. A key determining factor differentiating these two futures might be the magnitude of social investment in a robust ethical framework for AI applications, and continued emphasis on development of a just society, with social safety nets, to help mitigate the risks of development of sociopathic behaviors that would be especially dangerous with easy access to AI.”

Benjamin Shestakofsky , an assistant professor of sociology at the University of Pennsylvania specializing in digital technology’s impacts on work, said, “1) The ‘Uber-ization’ of everything will not proceed as rapidly, nor as evenly, as many now predict. Platform companies that facilitate the exchange of goods and services will continue to confront the reality that funneling idiosyncratic human activity through digital platforms is a complicated and costly endeavor. 2) Employers will continue to increase their use of connected technologies to monitor their workforces. However, workers will also continue to find ways to subvert employer surveillance and control. In many workplaces, employers will find it difficult to convert big data about employee activities into actionable insights. Nonetheless, legislators should act to limit the scope of employee surveillance and threats to employees’ privacy.”

A professor of information science wrote, “When I’m feeling dystopian, I see a world that looks a little too much like ‘Mr. Robot’ or ‘Person of Interest,’ with government or private organizations knowing too much about us and having too much control over us. I’d like to believe that interconnectivity could, instead, provide us with more ubiquitous access to information and with the ability to establish connections and deliver services across space and time.”

Stephen McDowell , a professor of communication at Florida State University expert in new media and internet governance, commented, “The area of law and policy is already showing some major stresses in dealing with networked connected data systems, apart from AI systems. Law and policy is often dealt with on a case-by-case and issue-by-issue basis, treating questions and legal traditions and precedents in isolation. These issues might include speech, privacy, property, informed consent, competition and security. This has weaknesses already in a networked world where large tech firms offer platforms supporting a wide range of services and track user behavior across services…. If we add systems with more learning and predictive power to this mix, it will be important to develop new concepts that go beyond the segmented approach to law and policy we are trying to use to govern internet-based interactions presently. We need to grapple with the totality of a relationship between a user and a service provider, rather than react to isolated incidents and infringements. We need to address the trade-off between offering free services and users allowing data to be collected with minimal understanding of their consent. We should also consider stronger limits on the use of personal data in machine learning and predictive modeling. Companies that automate functions to save on input costs and to allow services to be offered at scale to reap the private benefits of innovation must also take on responsibility for unintended consequences and possibilities they have created.”

Toby Walsh, a professor of AI at the University of New South Wales, Australia, said, “Like the Industrial Revolution before it, the Internet Revolution will be seen to have improved people’s social, economic and political lives, but only after regulation and controls were introduced to guard against the risks.”

Jonathan Taplin , director emeritus at the University of Southern California’s Annenberg Innovation Lab, wrote, “The answer to this question depends totally on the willingness of regulators and politicians to rethink their ideas about antitrust policies in the digital age. If current consumer welfare standards continue to be used, the existing internet monopolies (Facebook, Google and Amazon) will get more dominant in the AI age. They would be bigger and have more data than any government or other mediating institution. They would be beyond control. They would determine our future and politics would be of little use…. I can envision a world in which technology is a boon to human progress, but it cannot come about as long as the internet is dominated worldwide by three firms (with two Chinese competitors in Asia). It is possible that the current efforts around blockchain or the new work of Tim Berners-Lee may lead to a more decentralized web. Count me as skeptical.”

Doug Schepers, chief technologist at Fizz Studio, said, “The technology is less important than the laws, policies and social norms that we as a society will adopt to adapt to it.”

Randy Goebel, professor of computing science and developer of the University of Alberta’s partnership with DeepMind, wrote, “A challenge for an increasingly connected and informed world is that of distinguishing aggregate from individual. ‘For the greater good’ requires an ever-evolving notion and consensus about what the ‘greater’ is. Just as seat belt laws are motivated by a complex balance of public good (property and human costs), we will have to evolve a planet-wide consensus on what is appropriate for the ‘greater’ good.”

William Dutton , professor of media and information policy at Michigan State University, commented, “We are still in a transitional period, when so much of our time and effort is focused on getting connected and using technical advances. I could imagine so many devices that complicate contemporary life, such as the mobile smartphone, disappearing as they become unnecessary for accomplishing their functions. That said, the future will depend heavily on wise policy responses, even more so than technical advances.”

Luis Pereira , associate professor of electronics and nanotechnologies, Universidade Nova de Lisboa, Portugal, responded, “By virtue of the interconnection of the new tools there will be widespread data collection on people, their activities, connections, the environment and the Internet of Things. There will be increased promotion of gig-economy platforms and the focused targeting of individuals with consumerism and ideology. Unless moral values and ethical rules are put in place for application designers, product sellers, data users and autonomous software and robots, people will be forced into cluster drawers. A competitive and increasing AI race for control of profits and policies will sprout, including a digital weapons race, unless a way is found to promote collaboration instead, on the basis of regulated and overseen commitments (similar to global climate agreements) for the benefit of humanity and the planet. Certification methods for software that complies with such commitments need to be developed. People will be teaching machines how to replace themselves and others at increasing levels of cognition. Security will be a major concern. Technological developments will surpass human adaptability and raise issues we do not have the wherewithal to comprehend or address.”

Hari Shanker Sharma, an expert in nanotechnology and neurobiology at Uppsala University, Sweden, said, “Technology is a tool for making life better. A goal of life is happiness, satisfaction. Both require a set of values to remain good or become evil. The internet has brought the world together. Apps are tools to perform tasks easily. The Internet of Things will connect all living and nonliving things. But the dark side of human nature – the hunger for power, possession and control that has brought wars and terrorism – cannot be corrected by the internet or apps. There is a need to identify the evil in human nature and protect the simple, good and well-meaning from becoming its prey. Evil often moves ahead of good. Perhaps it can be predicted by features that check the psychology of individuals, crime records and other past behaviors to block certain actions or warn others. Biometric identification is already used for e-security – for instance, facial recognition – and it might be possible to have bio-feature readers to detect the evil-minded or those who are likely to become evil-minded and put safety checks in place at places of danger. Expert systems for face reading, feature reading, nature reading and analysis might give warning. Trackers could be established for isolated nodes and feed details to law-enforcement agencies. No evil-monger would agree to such checks and caution, but people need to be protected from online financial fraud, rapes by social media stalkers, murders by e-system users, etc., that go unchecked because no efficient warning system exists. The law today is not helpful. E-crime should be dealt with and punished without boundary. The internet needs global law and global governance to become user friendly. Global connectivity becomes a tool of criminals while those who are simply good have no power to handle evil.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “I hope historians’ verdict 50 years from now will be that we made the right choice in the years 2018-2020 to rethink access to the internet, data ownership and algorithmic transparency, thus setting all of humanity on a better course for the future.”

A director for an internet registry responded, “There will be ongoing radical development by which biology, at physical and molecular/genetic scales, will become integrated with digital technology. We can assume that this will be pervasive throughout society, but both the applications and the costs and conditions under which they may be accessed are unpredictable. The greatest determining factor in the overall result will be political rather than technological, with a range of outcomes between utopian and utterly dystopian.”

Andrea Romaoli Garcia, an international lawyer active in internet governance discussions, commented, “The cloud is a new world navigating in international waters. Because it is new, laws must follow the innovation. However, I have watched all countries make laws with their minds focused on traditional models of regulation. This is wrong. Laws must be international, and the notion of ‘disruption’ must be applied to the laws themselves. When we take old models of law and merely amend them to force-fit the new ways of doing business and everyday life, we create a crippled creature that moves in an ungainly way. I call this a ‘juridical Frankenstein’: laws that will apply to the cloud environment but will never be adequate, leaving legal security under threat.”

Stuart A. Umpleby, a professor and director of the research program in social and organizational learning at George Washington University, wrote, “The Congressional Office of Technology Assessment was eliminated by Newt Gingrich in order to put companies, rather than Congress, in charge of technology. Given unrestrained advancements in digital and biological technology, we now need such an office more than ever.”

Divina Frau-Meigs, professor of media sociology at Sorbonne Nouvelle University, France, and UNESCO chair for sustainable digital development, responded, “Currently there is no governance of the internet proper. Cases like Cambridge Analytica are going to become more and more common. They will reveal that the internet cannot be entrusted uniquely to monopoly corporations and their leaders, who are not willing to consider the unintended consequences of their decisions (which are mostly market-competition-driven). A global internet governance system needs to be devised, with multi-stakeholder mechanisms that include the voices of the public. It should incorporate agile consultations on many topics so that individuals can have an influence over how their digital presence can affect, or not, their real life.”

Jennifer J. Snow , an innovation officer with the U.S. Air Force, wrote, “The internet will continue to evolve in surprising ways. New forms of governance, finance and religion will spring up that transcend physical Westphalian boundaries and will pose challenges to existing state-based governance structures. The internet will fracture again as those founders who seek to return it to its original positive uses establish and control their own ‘walled gardens,’ inviting in only a select few to join them and controlling specific portions of the Net separately from nation-states. New policy and regulations will be required to address these changes and the challenges that come with them. New types of warfare will arise from internet evolutions but also new opportunities to move society forward together in a positive manner. States will no longer have the premium on power and nonstate actors, corporations and groups will be able to wield power at the state, national and regional level in new and unexpected ways. It will be a disruptive time and dangerous if not navigated smartly but may also result in some of the greatest advances yet for humanity.”

Peng Hwa Ang, professor of communications at Nanyang Technological University and author of “Ordering Chaos: Regulating the Internet,” commented, “We know that the future is not linear, which means that to be accurate I will be painting with broad brush strokes. 1) Laws – It is finally being recognized that laws are essential for the smooth functioning of the internet. This is a sea change from the time when the internet was introduced to the public more than 20 years ago. In the future, governments will increasingly feel empowered to shape laws to their own political, cultural, social and economic ends. That is, countries will regulate the internet in ways that express their own sovereignty. There will be a large area of commonality. But there will also be a sizable area where the laws diverge across borders. 2) Within 50 years, there should be one common trade agreement for the digital economy. It is difficult to see China carrying on its own terms. Instead, it is more likely that China will allow foreign companies to operate with little censorship provided that these companies do not ‘intrude’ into the political arena. 3) It is difficult to see Facebook continuing to exist in 50 years. 4) The harm from being always on will be recognized, and so users will spend less time online. Some of the time currently spent by users will be taken over by AI bots.”

Devin Fidler , futurist and founder of Rethinkery Labs, commented, “Over the last 50 years we have built a basic nervous system. Now, the challenge is to evolve it to best support human society. A great place to start is with the many positive and negative externalities that have been documented around network deployment. Simply amplifying the positive benefits to society for network activity and curbing network activities that impose an unfunded burden on society as a whole may be a great framework for creating a networked society that lives up to the enormous potential these tools unlock. Expect increased regulation worldwide as societies struggle to balance this equation in different ways.”

David A. Banks , an associate research analyst with the Social Science Research Council, said, “The character and functionality of the internet will continue to follow the political and social whims of the major power players in the industry. If these companies continue to engage in monopolistic practices without competent and reflective regulation, then we can expect an ossified and highly commercialized digital network. If something major changes then we can expect something radically different.”

Luis German Rodriguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, said, “The new internet will be blended with human-machine interfaces, AI, blockchain, big data, mobile platforms and data visualization as its main driving technologies. They will set up a robust and widely accessible Internet of Things. On the other hand, these will imply a disruptive way of facing everyday activities such as education, government, health, business or entertainment, among many others. Therefore, innovative regulation frameworks are urgently required for each of them.”

Julian Jones, a respondent who provided no identifying details, said, “Data security will be vital, as is privacy. It is essential that individuals have more control over the context in which their data is used. In the absence of such legislation the consequences for society could be catastrophic.”


Jennifer Jarratt , owner of Leading Futurists consultancy, commented, “We need new regulation now that can protect users and the digital world from themselves and itself. With those we could also have a fully digital government that might be able to handle some of the planet’s big problems. Expect also new activism and new social orders. In the next 50 years, technological change will produce significant change – but maybe not as much as we expect or would like. The world will have become more difficult to live in by then, so we’d better hope tech has some answers.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “The whole notion of connectivity is bound to be redefined in the not-too-distant future. When we extend the processes through which miniaturization is married with processing speed and divorced from personal device-based memory, the possibilities for connectivity/interactivity/control, and for what we mean by intelligence, are beyond the imagination of any but authors of science fiction novels (I guess that excludes those among us who consider ourselves to be ‘futurists’). I think the most interesting possibilities are those that actually eliminate (or seem to eliminate) the need to possess devices to make use of what we currently refer to as connectivity. This means that all we need is access to the intelligent network – a level of access that will not require manual action of any kind; I can even imagine that use of this network will not even depend upon requests made vocally – thought will be enough. So, I don’t know what the requisite ‘interface’ will be, but I believe that something akin to sensors interacting with implanted chips will be commonplace; eventually, even without the chips, sensing of the brain from what we would characterize as a reasonable conversational distance from the sensor(s) would be sufficient. Of course, for a privacy scholar, this is quite a leap from our present thinking about access to and control over our private thoughts. This will, therefore, be an area of much work with regard to law, regulation and control of these developments and their use by others for specified legitimate purposes.”

Jennifer King , director of privacy at Stanford Law School’s Center for Internet and Society, said, “The last 10 years have demonstrated the risks with unleashing the internet on society with little accounting for public responsibility. I predict in Western democracies, we will see a greater push for more regulation and corporate responsibility for the effects of technology. In totalitarian states, we will see concentrated social control through technology. And across the board, I suspect it will become increasingly difficult to live a life outside of the reach of technology.”

Tracey P. Lauriault, assistant professor of critical media and big data in the School of Journalism and Communication at Carleton University, commented, “We are already seeing platform convergence and the resale of platform data to third parties with whom we do not have a direct relationship. We already know that data brokerage firms are not regulated and there is very little regulation when it comes to credit scoring companies. In addition, we are already beginning to see erroneous social science hiding behind algorithms, not unlike what we saw at the beginning of the Enlightenment, and we have not even begun to address the social-technical and political outcomes of junk AI/social sciences (i.e., finding gay people or criminals in facial recognition – harkening back to the bad old days of eugenics and skull measuring). The European Union’s General Data Protection Regulation on the right to access information will help, but, for the moment, there is little individual and aggregate protection. Also, will private-sector companies that aggregate, buy and sell our data, that create individual data shadows or data doppelgangers that become our representatives in this data world, know more about us than we know about ourselves? What influence will they have on larger political decision-making? Decision-making over our lives? How do we correct these systems when they are wrong? How do we adjudicate and contest egregious ‘data-based decisions’ in the courts under current intellectual property law? And what of personal sovereignty and state sovereignty? What of other decision-making systems such as social scores in China? How will the poor, elderly and disabled be protected from automated decision-making about social welfare and supports if they do not have assurances that the decision-making about them is correct? And what of junk coding that persists and does not get removed and just keeps generating bad decisions? Who audits? Who is accountable?
And will these become the new governors? The future is here and we do not know how to deal with it. The EU is beginning to address these issues and to hold these companies to account, but our citizens in North America are not as well versed, and arguably, our governors seem generally less interested in our well-being, or perhaps are more ignorant of the implications.”

Andreas Kirsch , a fellow at Newspeak House, formerly with Google and DeepMind in Zurich and London, wrote, “Regulation will force open closed platforms. Information will flow more freely between services. Internet services will become more decentralized again as network bandwidths will not be sufficient for the data volumes that users will produce by then. Applications and services will not be coupled to devices anymore but will follow us freely between different contexts (shared car, home, work, mobile devices).”

Anonymous respondents said:

  • “It is not about the technology itself … it is about the lack of regulation by the institutions and their lack of understanding of the general public.”
  • “With each advance there are concerns about privacy and political abuses and these will need to be addressed with technology and with innovation in policy and laws.”
  • “The executives of Facebook will be indicted and their trial will begin the process of reform. Once we get over the idea that tech executives can commit heinous crimes and we hold them accountable, the tech world will begin the process of change.”

Internet everywhere, like the air you breathe

When asked to look ahead to 2069, respondents largely agreed that connectivity will be both more pervasive and less visible. A large share predicted that humans and networked devices will communicate seamlessly and the concept of “going online” will seem archaic. They anticipate that the internet will “exist everywhere,” turning planet Earth into a cybersphere where connectivity is as natural as breathing.

Alf Rehn , a professor of innovation, design and management in the school of engineering at the University of Southern Denmark, commented, “The curious thing will in all likelihood be how unaware we’ll be of the internet in 50 years. Today, the only time we really reflect on electricity and plumbing is when they break down. At other times, they’re just there, as self-evident as air. I believe we will look to digital tools in much the same way. We walk into a room and turn on our digital streams much like we turn on a light. We wonder how much money is in our bank account, and just ask the air, and the wall replies (‘You’re slightly overdrawn. Shouldn’t have bought those shoes. I told you.’). We start cooking, and our kitchen gently suggests we stop doing the Thai fish stew, because we forgot to tell the kitchen we wanted to do that, so it hasn’t ordered fresh lemongrass. We’ll do a Mediterranean trout dish instead. The only time we reflect over any of this is when the Net, for whatever reason, cuts out. It usually lasts only a few minutes, but for those minutes we become like children, stumbling around unsure what to do when not surrounded by endlessly helpful technology.”

Scott Burleigh , software engineer and intergalactic internet pioneer, wrote, “Machine-to-machine network communications will become ubiquitous, and computing hardware will have access to all human information; to the extent that hardware becomes intelligent and volitional it will replace humans in essentially all spheres. Humans’ ability to benefit from this advance will be limited mainly by our inability to come up with adequate interfaces – graphical user interfaces are a dead end, voice is simply annoying and nobody types fast enough. The hardware will know everything and won’t be able to convey it to us.”

Adam Powell , senior fellow at the University of Southern California Annenberg Center on Communication Leadership and Policy, wrote, “Predicting 50 years out is inherently risky (see all of those flying cars overhead?). But, barring a catastrophe – epidemic, war – extrapolating from recent history suggests the internet will become more pervasive, more powerful and less expensive. Think of electricity, or electric motors; they are ubiquitous, noticed mainly when they cease to function.”

John Laird , a professor of computer science and engineering at the University of Michigan, responded, “The internet infrastructure will disappear from public view. It will be ubiquitous, always on, always available and invisible. Access will be worldwide. What will change will be our means of interacting with it. Augmented reality will be ubiquitous (much sooner than 50 years), with essentially everything interconnected, including the human body – and possibly the human mind. There are many risks, and many ways in which ubiquitous connectivity can and will be abused, but overall, it will enhance people’s lives. We will go through ups and downs, but there will be significant advances in security.”

A senior data analyst and systems specialist with expertise in complex networks responded, “This is an area where I think a few science fiction writers, such as John Brunner, have seen the future. The future version of the internet will be more ubiquitous and more seamless (building on the Internet of Things), but it will also be much less secure, with people suffering damage from various kinds of hacking on a daily basis. However, this lack of security will gradually become the ‘new normal,’ and the outrage will fade.”

Nigel Hickson , an expert on technology policy development for ICANN based in Brussels, responded, “I do not think we will be talking about the internet in 50 years’ time. As the internet becomes ubiquitous it is simply an enabling force like air or water; it’s what we do with it that becomes more important – is the power used for good, to improve society, enhance freedom and choice, or is it used to enslave? The internet cannot be divorced from the progress of society itself. In an enlightened democracy the effect of the internet will have been positive, enhancing freedom and choice, but in a dictatorship the opposite could well be true.”

From the ‘Internet of Things’ to the ‘Internet of Everything’

In 1982, graduate students in the computer science department at Carnegie Mellon University connected a Coke vending machine to the ARPANET, creating the first “smart device.” The rise of networked devices, collectively known as the “Internet of Things,” was a dominant theme in the 2014 Pew Research Center-Imagining the Internet report on the Impacts of the Internet by 2025 . When asked four years later to look ahead to 2069, these expert respondents predicted the further rise of networked devices and extended the concept to include the technical hybridization of the natural world.

Edson Prestes , a professor and director of robotics at the Federal University of Rio Grande do Sul, Brazil, wrote, “I believe the internet will no longer exist in the way we see today. It will not be possible to see the internet as a huge network of connected devices, but instead it will be something unique that works in a pervasive and transparent way – like air that exists everywhere so we forget about its existence. We will use the environment to transmit information, via plants, soil, water, etc. We will develop new processes to take advantage of all resources available in the environment, e.g., we might use biochemical processes of plants to give support to data processing. Humans will be naturally adaptable to this pervasive environment. Some people will use prostheses to get/transmit/visualize and process information, maybe plugged directly in the brain and working in unison with the brain lobes. The information received from the environment can be seen as a ‘new sensory input.’ Thus, all interfaces and tools will be totally reshaped: no mouse, no menus, no ‘blue screens of death.’ Others, from the ‘old school,’ will use plug-and-play wearable gadgets.”

Valarie Bell , a computational social scientist at the University of North Texas, commented, “In the coming decades, we’ll have one ‘device’ if any at all. Everything will be voice-print-activated and/or bio-scanner-activated (retinal scan) so passwords and login details become irrelevant. This will make identity theft more difficult but not impossible, as no matter what system or technology people create, other people will immediately develop ways to deviate or breach it. All domiciles’ powered devices will likely be solar-powered or powered in a way other than 20th century electricity. Personal credit cards, driver’s licenses and other portable documentation that you’d carry in your wallet would become synced to a single cloud-based account accessible via bio-scannable systems. To buy groceries, simply use your home grocery ‘app’ to open your account as your pantry, freezer, and fridge order what you’re out of. Then robots will pack your order and self-driving cars with robot delivery staff will restock your kitchen. Later, groceries will appear in your kitchen in much the same way Capt. Kirk and Mr. Spock used to beam up to the Enterprise on ‘Star Trek.’ Instead of you teaching your young children to read, tie their shoes, do their homework or clean their room, aids like Alexa that are more developed and can operate in multiple rooms of the house will do those things. People continue to abdicate their duties and responsibilities to devices and machines as we’ve become more selfish and self-obsessed. Social networking sites like Facebook will be holographic. People will likely have one or more implants to allow them to access the internet and to access whatever the future computer will be. People won’t type on computers. Perhaps you’ll be able to think what you want to type and your system will type it for you while you eat lunch, watch TV, walk in the park or ride in your self-driving car. 
It’s also important to remember that past projections from 50 years ago never predicted the internet but did predict lots of technology that even now we still don’t have. So we can expect the same with our predictions.”

Stephen Abram , principal at Lighthouse Consulting Inc., wrote, “We will be well beyond apps and the web in 50 years. The networked information, entertainment and social world will be fully integrated into biology and networked appliances (not toasters but a full range of new appliances that may be stand-alone like Google Home but are more likely fully integrated devices into architecture and spaces).”

Lee McKnight , associate professor, School of Information Studies, Syracuse University, commented, “The internet will reach close to 100% of humans, forests, fields and streams, as well as most non-human species, in 2069. The Internet of Things will grow to trillions of things – and all factories, cities and communities…. I do expect pop-up networks will permit people even in the most remote locations, or communities with limited means, to access and share services and internet bandwidth from literally anywhere on this planet, as well as from our Mars colonies and moon bases. What, you thought there would be just one? Forecasting the way we interact with software and hardware is too limited a starting point, as we must assume biochemistry (wetware) will also increasingly take its place in human-machine interaction environments and platforms. While science fiction is comfortable imagining all kinds of scenarios, the future-realist in me can only see good, bad and ugly wetware interacting with all of us, at all times, in 2069.”

Mícheál Ó Foghlú , engineering director and DevOps Code Pillar at Google, Munich, said, “Looking forward 50 years is almost impossible. I think the biggest trend we can anticipate from today’s frame is that the huge increase in machine-to-machine intercommunication, the Internet of Things, will transform the landscape. This will mean all electronic devices will have some form of built-in intelligence and many systems will layer on top of this massively interconnected intelligent mesh.”

Peter Eachus , director of psychology and public health at the University of Salford, UK, responded, “The most fundamental change will be the way in which we interact with this connected technology. There won’t be tablets or smartphones or screens. We will be able to just think of a question and the answer will immediately come to mind! The Mindternet is the future!”

A professor and director at a major U.S. university said, “While the Internet of Things will be touted as time-saving and labor-saving, it will present additional challenges due to distraction and will reduce the quality of interpersonal relations, in addition to adding security vulnerabilities.”

Additional anonymous respondents said they expect:

  • “We will be much less aware of the internet because it will be mostly seamlessly woven into our everyday lives.”
  • “A total integration of human inputs (perceptions) and outputs (actions) with the internet and the objects and tools around them.”
  • “Free internet access worldwide will be regarded as a basic human right.”
  • “People will be seamlessly and continuously interconnected without having to use a device of any kind.”
  • “Everything will be stored in cloud storage. Sensors will be everywhere, from parking lots to agricultural fields.”
  • “More and more of our spheres – even our bodies – will be more and more integrated into the network.”
  • “There will be a cashless society. E-shopping will dominate people’s lives. The Internet of Things will become a part of us – embedded, for instance, in clothes, thermoses, heating systems, etc.”
  • “Due to the lack of transparency and understanding of algorithmic systems and their owners, humans’ individual autonomy and agency is going to decrease.”
  • “More connected objects and connected experiences will help bridge the digital divide and allow everyone to profit from the digital lifestyle. At the same time, advances in green tech will ensure that this connectivity does not come at the expense of the environment.”
  • “Your report card could be connected to, say, a restaurant’s app which will make reservations for you when you get good grades.”

It will be impossible to unplug

A share of respondents explored the possibilities and challenges of living in a fully networked world where it is difficult, or even impossible, to disconnect. The following comments illuminate some of their expectations for the future of constantly connected life.

Steven Polunsky , director of the Alabama Transportation Policy Research Center, University of Alabama, said, “We all know where this is going. We are at the earliest stages of making devices like electric and water meters ‘smart’ and integrating home accessories with internet functionality. The issue is whether people will be allowed, by regulation or by practical exercise, to opt out, and what the effects of that action will be, as well as what efforts will be required to bring services to those at the fringes. Does government have an obligation, such as led to the creation of the Rural Electrification Administration or Essential Air Service, that extends to the requirement or provision of broadband and beyond to the services it enables?”

Helena Draganik , a professor at the University of Gdansk, Poland, responded, “The rules/law of internet communication will be unified between many countries, which will limit the freedom of expression. There will grow to be even more dependence upon big platforms (e.g., Facebook) and a deepening of the monetization of our customs and habits. The marketing industry will grow. The internet will just be one more, marketing-dependent medium – as press or TV. Yes, in the future there will be many information technology and artificial intelligence applications and commodities to simplify our lives. But it is possible that we will not be able to function properly without them.”

An expert on converging technologies at a defense institute wrote, “The internet 50 years from now will look nothing like it does today. Physical infrastructure will be entirely pervasive and wireless (perhaps non-electronic) and digital elements will be directly interfaced with human brains. And the minds of different individuals may be directly linked. This will be a new era for humankind, which is difficult to hypothesize about.”

Christopher Yoo , a professor of law, communication and computer and information science at the University of Pennsylvania Law School, responded, “If I had to predict (and undertake the concomitant risk and inevitable likelihood that some of these predictions will turn out to be wrong), I would expect more users to become increasingly reliant on their mobile devices and to rely on them for mobile payments and other functions. Just as cloud computing disintermediated PC operating systems and created new key intermediaries, such as hypervisor leader VMWare, these new functions will shake up existing industries and inevitably displace incumbents that are too slow to innovate.”

Nancy Greenwald , a respondent who provided no identifying details, wrote, “I started on the early internet in 1983-84 on ‘Dialog,’ with a dial-up connection. Now I talk with my devices, giving instructions, dictating, etc. What I expect to see is a growing number of tasks we can complete through the internet, continual increases in collaborative platforms with an increase in a greatly improved ‘open API’ type of program integration, and an increase in the ways we connect with the technology (our wearable technology is crude) so that we are continuously connected. I already have the feeling that one of my senses is cut off when I am unable to connect to the internet. I expect that sense of enabling/dependence to increase.”

A well-known writer and editor who documented the early boom of the internet in the 1990s wrote, “We will take omniscience over the state of the world for granted because we will be connected to everything, always. We are therefore all the more likely to be distracted from asking questions that really matter. On balance, greater knowledge leads to greater happiness – though there is a lot of distraction to get through along the way.”

A professor of electronic engineering and innovation studies who is based in Europe commented, “A radical change will occur in the way the people see human-machine and human-human interactions. Humans will be entirely dependent on information systems, just like our generation got used to being dependent on electricity or transport systems. Also, expect radical innovations in neural connection (i.e., human brains integrated with computers). The effects of this remain highly unpredictable.”

Slowing the pace of internet innovation

Although a significant majority of survey respondents expected the rate of technological advancement to remain steady or increase in the next 50 years, a vocal minority argued that humanity may be entering a cooling-off period when it comes to digital evolution.

Lee Smolin , a professor at Perimeter Institute for Theoretical Physics and Edge.org contributor, responded, “Many technologies evolve fast until they reach functional maturity, after which how they function for us evolves slowly. I suspect the internet has already reached, or will shortly reach, that state.”

Ken Birman, a professor in the department of computer science at Cornell University, responded, “Technology booms take the form of ‘S’ curves. For any technical area, we see a slow uptake, then a kind of exponential in which the limits seem infinite, but by then things are often already slowing down. For me, the current boom in cloud computing has created the illusion of unbounded technical expansion in certain domains, but in fact we may quickly reach a kind of steady state. By 2050, I think the focus will have shifted to robotics in agriculture and perhaps climate control, space engineering, revolutionary progress in brain science and other biological sciences. This is not to say that we will cease to see stunning progress in the internet and cloud, but rather that the revolutions we are experiencing today will have matured and yielded to other revolutions in new dimensions; they will surely leverage the network, but may no longer be quite so network-centric.”
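Birman’s “S” curve is conventionally modeled by the logistic function. The sketch below is purely illustrative, with made-up parameters rather than any figures drawn from the survey:

```python
import math

def logistic(t, limit=1.0, rate=1.0, midpoint=0.0):
    """S-curve: slow uptake, a near-exponential middle, then saturation."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

# Early growth looks exponential; later it flattens toward the limit.
print(round(logistic(-4), 3))  # slow uptake, ~0.018
print(round(logistic(0), 3))   # inflection point, 0.5 (fastest growth)
print(round(logistic(4), 3))   # approaching saturation, ~0.982
```

The inflection point is where the exponential illusion Birman describes is strongest: growth is at its fastest there even though half the eventual ceiling has already been reached.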

Zoetanya Sujon , a senior lecturer specializing in digital culture at University of Arts London, commented, “Based on the cyclical histories of the printing press, telephone, internet, virtual reality and artificial intelligence, I believe that all technologies are subject to waves, often characterized by ferment/early development, great claims and excitement whether positive or negative, and if they reach the mainstream, they will also experience an era of maturity marked by institutionalization and ‘an era of dominant design.’ After this point, technologies are likely to become obsolete, adapt or converge, or follow through incremental change – all rather like knowledge and product cycles.”

A lead QA engineer at a technology group said, “Twenty years ago someone told me that in the future all of our applications and data would be online. I did not believe it … and here we are today. The advances in technology are based on continued availability of electricity that makes technology and connectivity possible. I have a feeling that while many advances are made, some in our society will want to separate themselves. Like in the 1950s the big thing was canned goods, instant meals, and now 50 years later many are going back to cooking from scratch.”

An internet pioneer wrote , “If history is a guide, the 10 most valuable companies in the world will be different 50 years from now than they are today. These new players will have succeeded in re-centralizing something that earlier generations had de-centralized. Perhaps we return to desktop/mobile phone single-vendor dominance. Combined with human-computer interfaces, the prospect of single-vendor control over the operating system of a substantial portion of your brain is rather frightening. As to the core internet itself – I suspect it won’t actually change a lot. Just like railroads or highways, infrastructure sees short periods of time of great innovation, and then a long plateau. I don’t think the internet has seen much change in the last 10 years aside from being bigger, colder, harsher and filled with more bad actors, so I suspect that plateau will continue more or less for another 50.”

A principal researcher for one of the world’s top five technology companies commented, “What technology makes possible in 50 years depends on what technology exists in 50 years. Will Moore’s Law and related semiconductor accelerations be extended through quantum, optical, or some other computing? A breakthrough there in the next 20 years would lead to unimaginable consequences in 50 years. But it seems more likely that they won’t, so we can expect a slow realization of the full capabilities of technology that is not qualitatively different from today’s. That leaves substantial room for increased capability as cloud computing and the Internet of Things get worked out with modest assists from data science and machine learning, and as our attentional balance shifts from novelty and eye-catching visual design to utility and productivity.”
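For a sense of the scale behind that researcher’s question: if a Moore’s-Law-style doubling every two years (an assumption for illustration, not a figure from the respondent) were somehow sustained for 50 years, compounding alone would yield a staggering factor:

```python
# Doubling every 2 years over 50 years gives 25 doublings.
doublings = 50 // 2
factor = 2 ** doublings
print(f"{factor:,}")  # 33,554,432-fold increase
```

This is why a breakthrough in quantum or optical computing that kept the doubling alive would lead, in the researcher’s words, to “unimaginable consequences in 50 years.”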

Visions of the future: ‘Brave New World,’ ‘1984’ or ‘The Jetsons’

A number of respondents shared colorful descriptions of what they expect the world might look like in 2069.

Garland McCoy , founder and chief development officer of the Technology Education Institute, wrote, “On the first day there was analog voice and, behold, it was good. On the second day there was human-generated data/content, and it was pleasing to the people. On the third day machines began to talk directly to machines and this was seen as excellent indeed. On the fourth day, machines began to design their own network of networks (e.g., LoRaWAN, a device-to-device architecture), and behold great efficiency spread out upon the land. On the fifth day humans began to leave their homes and assemble at the town square to talk among themselves face-to-face and this brought great joy to the multitudes. On the sixth day, just as the wise men from the Semiconductor Industry Association had predicted, the world was unable to generate enough electricity to feed all of the chips/devices the wise men had created and darkness descended upon the land. On the seventh day the people rested because that was all they could do. And so endeth the lesson.”

Baratunde Thurston , futurist, former director of digital at The Onion and co-founder of the comedy/technology startup Cultivated Wit, wrote, “With land and servers, Amazon was able to accelerate the merger of the space formerly referred to as ‘the internet’ and the realm once called ‘meatspace,’ or ‘in real life,’ such that there is no longer a distinction – it is all referred to now as ‘The Prime Network.’ … Once it was proven in 2045 that a hybrid human-networked intelligence could manage and draft legislation far better than inconsistent and infinitely corruptible humans, the U.S. Congress was replaced with a dynamic network model accounting for the concerns of citizens yet bound by resource constraints and established laws. This happened too late to save Miami, which is now only accessible by automated submarine, historical tours or VR re-creations, but it did help rally the resources required to halt The Ten-Year Burn in California and restore much of Lower Manhattan. Americans now spend roughly 30 percent of their waking hours in SR (simulated reality) environments. Many spend this time reliving revised personal histories which make them the most popular students in high school even though industrial school farms were abolished 25 years ago and replaced by personalized Mental Training Plenaries that dynamically adjusted to the learning styles and needs of each student. Another 20% of waking hours are spent passively consuming immersive narratives customized to each person. In order to maintain social cohesion, however, these personalized narratives have overlapping characters, plot points and themes so that people have something to talk about when they encounter their fellow humans. Americans split the rest of their time between eating, picking up litter and serving on the obligatory Algorithmic Oversight Committees. Advertising has been banned. 
Once we launched the 360 Accounting Project to measure the impact of nearly all human endeavors and score them on various elements, the practice of advertising was found to have a negative social, financial, emotional, ecological and moral return on investment. Any human or hybrid engaged in advertising is disconnected from The Prime Network for six hours on a first offense, one day for a second offense and permanently for a third offense. Amazon is exempt from the advertising ban per the Terms of Service that govern all Prime citizens.”

Jamais Cascio , research fellow at the Institute for the Future, wrote, “I imagine three broad scenarios for AI in 50 years. No. 1, EVERYWARE, is a crisis-management world trying to head off climate catastrophe. Autonomous systems under the direction of governance institutions (which may not be actual governments) will be adapting our physical spaces and behaviors to be able to deal with persistent heat waves, droughts, wildland fires, category 6 hurricanes, etc. Our routines will be shaped by a drive to a minimal footprint and a need to make better longer-term decisions. This may not be ‘green fascism’ precisely, but that will be a common invective. The dominant design language here is *visible control* – of public spaces, of economic behavior, of personal interactions, etc. AI is a climate-protective Jiminy Cricket with an attitude. No. 2, ABANDONWARE, is also crisis-driven, but here various environmental, economic and political crises greatly limit the role of AI in our lives. There will be mistrust of AI-based systems, and strong pushback against any kinds of human-displacement. This likely results from political and economic disasters in the 2040s-ish linked to giving too much control to AI-based systems: institutional decisions driven by strategies to maximize profits and control, while minimizing uncertainty and risk. AIs messing around with elections, overriding community decisions and otherwise pushing aside fuzzy emotional thinking with algorithmic logic goes swiftly from being occasionally annoying to infuriatingly commonplace. The dominant design language for AI here is submissive . AI is still around, but generally whimpering in the corner. No. 3, SUPERWARE, is the world described in the first answer (AI common but largely invisible) turned up to 11. In this scenario, AI systems focus on helping people live well and with minimal harm to others. 
By 2069, the only jobs performed by humans in the post-industrial, post-information world require significant emotional labor, unique creative gifts or are simply done out of the pleasure of doing them. The newly developed world is still adapting, but what amounts to the end of 19th century industrial capitalism forces this change. AI-based systems are dealing with climate, global health, and the like, but in ways meant to increase human well-being over the long term. Most people born before 2020 hate this, seeing it as ‘robo-nanny state socialism’ and ‘undermining human dignity’ even as they take advantage of the benefits. The dominant design language for AI here is ‘caring.’ Machines of Loving Grace, whether you like it or not.”

Ebenezer Baldwin Bowles, author, editor and journalist, responded, “The next 50 years? A time frame ending in 2069? As grandpa would say, ‘I can’t imagine.’ But we must try or else fall silent. 1) The best and brightest will communicate brain-to-brain through implants linked to synapses altered by quantum surgery. Encrypted and delivered by carbon-silicone hybrid technology, this radical expression of the desire to communicate will create new systems of power and control by the planet’s ruling class. 2) Global nation-states, empowered by iron-fisted control of electronic media and financial systems, protected by police drones and robots through continuous surveillance systems, and sustained by a willing populace, will oversee legions of workers dedicated to the maintenance of the ruling class of the 1%. 3) The development of no-cost neighborhood-based replicator stations will provide unlimited access for everyone to nutritious food, comfortable clothing suitable to local climates, every imaginable item necessary to maintain a household, and personal necessities linked to popular concepts of comfort and entertainment. The replicator system, an advanced expression of today’s 3D printing technology, will serve as a means of control of the working and professional classes – a chicken in every pot times 10. So, robots and drones with the Evil Eye to watch and control the people. Unlimited food, clothing and shelter to cow the masses into happy servitude. Total reliance on AI and its tendrils to supply the necessities of life. What a wonder to behold in 2069. Think back to 1969. Even the most imaginative thinkers missed the one crucial aspect of digital control of everyday life in 2018: the surveillance camera. Who back then could imagine the total loss of privacy and personal independence we live with today? We are swallowed up by digital influences now.
In 50 years the influences shall morph into total control, and the world we know now shall be devoured by electric ones and zeroes, one after another in the rapid march to dissolution.”

Jerry Michalski , founder of the Relationship Economy eXpedition, said, “Most internet-connected devices have been p0wn3d and are in the Dark Net, making most systems scary and unstable. Super-small drones changed warfare and policing, making it difficult and expensive to hide. Anyone who feels at risk travels in a self-sufficient chamber to avoid infiltration. Meanwhile, a quarter of humanity has figured out how to hear one another and live in abundance, but they have to keep below the radar…. Over 50 years many more things will change, but the forces at play are shoving society in negative directions. People who want better will achieve progress, but I see a dystopian future for the majority of humanity.”

A research scientist who works for Google said, “You want a 50-year prediction? I’m not sure what to say. Google is only 20 years old – would you have predicted that (and all of the side effects) back in 1968 (50 years ago)? Likewise, Amazon is 24 years out. My point is that predicting tech changes in the online/software space is really, really hard. Remember the rise (and fall, and rise?) of walled gardens? Did anyone predict the fall of AOL back when it was the biggest company around? A few things I can predict with confidence: 1) There will be new business models that we do not yet know about. Amazon was enabled by a host of technologies that didn’t exist in 1968. Play that same tune forward. 2) There will be a backlash against the Internet of Things. Just sayin’. 3) Eventually, we’ll figure out how to do sufficiently high frame-rate and precision registration so that VR/AR actually works. Both will be interesting; both have the possibility of being world-changers. (But I don’t know how that will happen yet. Probably, it will happen in a way we don’t yet understand.) 4) Bandwidth will eventually make it into the entire third world. That will change the online landscape as much as when the ARPANET became open for commercial purposes. (That is, dramatically.) 5) The social effects of connectivity (especially in the third world) + bandwidth + radicalized pockets of folks will make the current internet battles seem tame. AI will be important, but it’s not going to be the big driver.”

The chief marketing officer for a technology-based company said, “The Internet of Things and AI will exponentially help to automate and organize society and the world at large by enhancing existing infrastructure and innovating new ones.”

An anonymous respondent wrote, “Widespread networked computing will have collapsed 50 years from now, as will society.”



ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

© 2024 Pew Research Center

Science and Technology: Impact on Human Life


Introduction


Science plays an important role in everyday life, and people depend on technologies in a variety of ways, creating, using, and improving them regularly. Sometimes a person hardly notices how pervasive the impact of science can be on personal or professional life. Evaluating such technologies as the Internet, smartphones, notebooks, smartwatches, and brain-machine interfaces helps recognize their positive and negative outcomes compared to the period when traditional lifestyles and natural resources like ginger were highly valued.

Most people are confident in their independence and overlook the many technologies that shape their lives. During the last 25 years, technology has dramatically changed human interactions (Musil, 2020). In addition to domestic technological discoveries like washing machines and stoves, four technologies, namely, the Internet, smartphones, notebooks, and smartwatches, are used throughout the day. Despite their evident advantages in communication, data exchange, and connection, some negative impacts should not be ignored.

Regarding my personal life, these technologies can provoke mental health problems such as depression. I try to avoid depending on the technologies that imperceptibly shape everyday activities, yet I constantly check my vitals, messengers, and calls so as not to miss something important. On the one hand, this sense of control helps organize and improve my life; on the other hand, I am concerned about such a relationship with technology. Similar negative impacts on society emerge when people prefer to communicate virtually instead of paying attention to reality. Technologies compromise social relationships because individuals are eager to choose something easier that requires less movement or participation, neglecting their unique chances to live a real life. They also burden the environment, because smartwatches and notebooks alike need energy whose generation is associated with air pollution, climate change, and other harmful emissions (Trefil & Hazen, 2016). Modern technologies facilitate human life, but their health, social, and environmental outcomes remain dangerous.

Thinking about my day, I cannot imagine another scientific discovery that makes this life possible except the Internet. Today, more devices than ever are connected to the Internet, including cars, appliances, and personal computers (Thompson, 2016). Over time, people have gained the opportunity to use the Internet for multiple purposes: to store personal information, business documentation, music, and other files that carry meaning in their lives. The Internet defines the quality of human relationships, starting with healthcare data about a child and ending with online photos after a person’s death.

Although the World Wide Web was invented at the end of the 1980s, this technology entered everyday use in the mid-1990s. People admired such possibilities as connection across the globe, increased job opportunities, regular information flows, a variety of choices, online purchases, and good education opportunities (Olenski, 2018). Many genuinely believed that the Internet freed society from real-life boundaries and limitations. However, with time, its negative sides were revealed, including decreased face-to-face engagement, laziness, and the promotion of inappropriate content (Olenski, 2018). When people prefer their virtual achievements and progress but forget about real obligations like parenting, education, or keeping a healthy lifestyle, the Internet is no longer a positive scientific discovery but a serious problem.

Many discussions seek to identify the overall impact of the Internet as a major scientific discovery. Modern people cannot imagine a day without using the Internet for work, education, or personal purposes. However, when online life becomes an obsession, the negatives prevail over the positives. Therefore, the human factor and real-life priorities should always be recognized and promoted. During the pandemic, the Internet has been a priceless contribution that helps people deal with isolation and mental health challenges. Some people cannot reach each other because of family issues or business trips, and the Internet is the only reliable and permanent means of connection. Thus, the positives outweigh the negatives overall if everything is used rationally.

The Internet makes it possible for healthcare providers to exchange knowledge and experience across the world. This possibility explains the spread of the westernized high-tech research approach to medical treatment and the promotion of science in a multicultural care world. Biomedical research changes the way people are diagnosed and treated. Recent genomic discoveries help predict the possibility of cancer and human predisposition to other incurable diseases, improving awareness of health conditions. The benefit of new brain-machine interface (BMI) technologies is life improvement for disabled people, who can move their prosthetics easily (The American Society of Mechanical Engineers, 2016). Instead of staying passive, individuals use smart technology to hold objects, open doors, and receive calls. BMI has a high price, but its impact is priceless. At the same time, some risks of high-tech research exist in medical treatment. The American Society of Mechanical Engineers (2016) underlines that damage to neurons and fibers depends on what drugs are delivered to the system and how. The transmission of electrical signals is not always stable, and the safety of BMI procedures is hardly guaranteed.

Some populations reject technologies in medical treatment and prefer natural resources to stabilize their health. For example, ginger is credited with several positive clinical applications in China. Researchers believe that this type of alternative medicine effectively manages nausea, vomiting, and dizziness (Anh et al., 2020). Its major advantage is reported by pregnant patients who use ginger to prevent morning sickness, unnecessary inflammation, and nausea. However, like any medication, ginger has adverse effects, covering gastrointestinal and cardiovascular symptoms (Anh et al., 2020). The disadvantage of traditional medicine is its unpredictable action time. When immediate help is required, herbs and other products are less effective than a specially created drug or injection.

There are many reasons for having multicultural approaches to medical treatment, including ethical recognition, respect, diversity, and improved understanding of health issues. It is not enough to diagnose a patient and choose a care plan. People want to feel supported, and if one culture’s approach misses some perspective, another’s can fill the gap. Western and traditional cultural approaches may each be improved by drawing upon the other. However, such a combination risks diminishing both the weight of tradition and the worth of technology in medical treatment. Instead of merging the options, it may be better to preserve their differences and underline the importance of each approach separately. The challenges of combining these approaches vary from differences in religious beliefs to financial problems. These tensions between science and culture are valuable for medical treatment because they offer options for people and underline the uniqueness of both populations and technological progress.

In general, science and traditions are two integral elements of human life. People strive to make unique contributions to technology and invent devices that facilitate human activities. At the same time, they never neglect respect for traditions and cultural diversity. Therefore, high-tech and traditional medicine approaches are commonly discussed and promoted today to identify more positive impacts and reduce negative associations and challenges.

The American Society of Mechanical Engineers. (2016). Top 5 advances in medical technology. ASME. Web.

Anh, N. H., Kim, S. J., Long, N. P., Min, J. E., Yoon, Y. C., Lee, E. G., Kim, M., Kim, T. J., Yang, Y. Y., Son, E. Y., Yoon, S. J., Diem, N. C., Kim, H. M., & Kwon, S. W. (2020). Ginger on human health: A comprehensive systematic review of 109 randomized controlled trials. Nutrients, 12(1). Web.

Musil, S. (2020). 25 technologies that have changed the world. CNET. Web.

Olenski, S. (2018). The benefits and challenges of being an online-only brand. Forbes. Web.

Thompson, C. (2016). 21 technology tipping points we will reach by 2030. Insider. Web.

Trefil, J., & Hazen, R. M. (2016). The sciences: An integrated approach (8th ed.). Wiley.


20 June 2023

The world’s plan to make humanity sustainable is failing. Science can do more to save it



Between now and September, Nature will be publishing a special series of editorials covering each of the 17 Sustainable Development Goals. Credit: Stanislav Kogiku/SOPA Images/LightRocket via Getty

Many people would be unable to name even one of the 17 United Nations Sustainable Development Goals (SDGs), which are at the heart of an international project that aims to end poverty and achieve equality while protecting the environment. From this week, to help raise awareness, we at the Nature Portfolio journals will intensify our ongoing efforts to publish research and commentary on the SDGs.

The SDGs and their 169 associated targets are among humanity’s best chances of dealing with global crises, from climate change to economic hardship. World leaders agreed the goals in 2015 and set a 2030 deadline to achieve them. This year, at the half-way point, it looks likely that none of the goals and just 12% of the targets will be met.

In September, world leaders will gather in New York City to come up with a rescue plan. And between now and then, Nature will be publishing a series of editorials focusing on the different SDGs, covering what has and hasn’t been achieved, what can be done to improve matters, and the part the global scientific community has to play.

The failure to meet even one of the SDGs is not for want of trying. Worldwide, researchers have been aligning their work with the SDGs, along with other global efforts such as UN conventions on climate change and biodiversity loss. Unfortunately, fracturing geopolitics is hindering international cooperation. In addition, there is limited cooperation and coordination across topics and between disciplines.

There is a need to give more consideration to complementarities and trade-offs between the different SDGs. For example, action to develop affordable and clean energy (SDG7) to tackle climate change (SDG13) can have negative local effects on biodiversity (SDGs 14 and 15) through the construction and operation of facilities such as wind and solar farms. And although finance for coal-fired power is an effective way to create work and economic growth (SDG8), it is bad news for health and well-being (SDG3), as well as for the environment. The knowledge of these trade-offs is often given insufficient consideration in policymaking.


Do the science on sustainability now

Last week, an independent group of science advisers to the UN proposed a way forwards. Their 2023 Global Sustainable Development Report (GSDR) summarizes where the SDGs are failing, and what can be done to rescue them. It reiterates the need for transformational change to get the world onto a sustainable path. Crucially, it recognizes the interconnectedness of the goals and targets.

Like its predecessor, published in 2019, the report recategorizes the SDGs into six “entry points”: human well-being and capabilities; sustainable and just economies; sustainable food systems and healthy nutrition patterns; energy decarbonization with universal access; urban and peri-urban development; and global environmental commons. To progress on human well-being, for example, the report recommends scaling up investment in primary health care and ensuring access to lifesaving interventions; accelerating enrolment in secondary education; and increasing investment in water and sanitation infrastructure.

The authors recognize that the path to sustainability must also include abolishing unsustainable practices, while taking into account the economic and social pain that this can cause. For example, increasing the availability of renewable energy won’t, on its own, tackle climate change: fossil fuels must also be phased out. As we wrote last week (Nature 618, 433; 2023), there is active resistance to this move and a genuine need to support affected communities, such as those that have relied on the coal industry for decades. Such scenarios don’t apply only to reaching energy and climate goals.

The GSDR represents welcome progress on the ‘what’ of meeting the SDGs. It also proposes what to do on the ‘how’. The necessary transformations will be expensive, the authors say — requiring extra annual public and private investment of up to US$2.5 trillion. For efforts to succeed, new ways of governing will be required, with the creation of new institutions and the reform of old ones to put sustainability front and centre. Individual and collective action of the kind already under way will also be needed, but on a bigger scale. And people must be given the right resources and skills to complete the task. This will be especially important in low- and middle-income countries (LMICs).

Implicit — and to a degree explicit — in all this is changing how science itself is done. The report argues that the actions that steer the world towards a sustainable path must be rooted in science that is multidisciplinary, equitable and inclusive, openly shared and widely trusted, and “socially robust” — in short, responsive to social context and social needs. As the authors acknowledge, for that to happen, global science needs to evolve. Knowledge needs to be more accessible than it is at present, and the production of that knowledge needs to be more open, too, recognizing, for example, the value of Indigenous and local knowledge to sustainable innovation.

We know from a separate UN study published in 2021 that science in LMICs is already much more aligned with the SDGs than is science in high-income countries. And LMICs have published a much higher volume of research relating to the SDGs (see ‘Sustainability science’). The challenge is how to improve the situation in high-income nations. Widespread improvement would be truly game-changing for sustainability.

Sustainability science: volume of publications related to the United Nations Sustainable Development Goals, 2000–2019. Adapted from: Changing Directions: Steering Science, Technology and Innovation towards the Sustainable Development Goals.

That’s where we’ll take our cue. We’ll assess the evidence and talk to researchers about the state of play on the SDGs, and explore questions that researchers can help to answer. Right now, a sustainable future remains as far away as ever. If there’s even a small chance that we can still achieve the SDGs by 2030, we need to seize it with both hands. As so many have already said, there is no planet B.

Nature 618, 647 (2023)

doi: https://doi.org/10.1038/d41586-023-01989-9


How Is Technology Transforming Humanity? Q&A with Darden Professor Roshni Raveendhran

By Gosia Glinska

Technology is reshaping the way we live, work, play, and interact. In addition to creating new opportunities for human flourishing, artificial intelligence and other technological advances are posing risks for humanity that are hard to predict.

To promote better understanding of the human-technology relationship, the Psychology of Technology Institute, in partnership with Darden’s Batten Institute for Entrepreneurship and Innovation, is convening the fourth annual New Directions in Research on the Psychology of Technology Conference. Bringing together a cross-section of scholars, industry leaders and policy experts, the conference will take place at the UVA Darden Sands Family Grounds in the Washington, D.C., area 8–9 November.

We sat down to talk about technology’s growing impact on our lives with one of the conference organizers, Darden Professor Roshni Raveendhran, whose research explores the intersection of psychology and technology.

Q: What will be the theme of this year’s Psychology of Technology Conference?

Raveendhran: As technology moves into the workplace — changing the way we work and engage with each other — it’s important to study the interaction between technology and humanity. How does tech influence humans? How do humans influence the creation of new, interesting technologies? And this year, specifically, we’re interested in exploring automation, artificial intelligence, algorithmic decision-making, and technology and human well-being.

Q: What’s the focus of your own research?

Raveendhran: I study the future of work, and I’m interested in questions that are at the intersection of technology and humanity in the context of the workplace. So, some of my research explores what it means for humans to be working with novel technologies, such as AI, and various behavior tracking products that are coming to the workplace. How do novel technologies change our behavior toward each other? For example, why might managers use technology when dealing with others in uncomfortable situations? Or, how effective is it to use AI for social support at work? I explore these questions with a psychological lens at the micro level, where we’re looking at individual human behavior, and at the macro level, where we’re looking at how human behaviors around technology adoption influence organizational strategy and performance.

Q: What prompted you to explore those topics?

Raveendhran: When I was in grad school, I was interested in understanding our experience of autonomy. Technology allows us to do things without the help of others and be more autonomous. For example, we used to ask people for directions, and now we have maps on our phones, and that’s great. I wanted to explore how our experiences of autonomy affect our view of technology and how technology is influencing our psychological experiences and behaviors. I believe that we should think about how to leverage technological advances to augment us as humans, as opposed to viewing technology as our enemy. I want my research to inform society about how to use novel technologies responsibly.

Q: How are some of the behavior-tracking technologies you mentioned being used in the workplace?

Raveendhran: One of the best examples of behavior tracking products is the smartbadge. There’s a company called Humanyze that makes these badges. The badges have sensors, microphones and motion detectors, through which they measure the amount of face-to-face interactions you have — and even tone of voice — and give you feedback about your social interactions. Many companies are using them in interesting ways. So, if teams are not communicating with each other, or if employees are not reaching out and learning from each other, those badges can track that and suggest connecting with someone who could be a resource. Lots of those wearables can track various aspects of your behavior, including emotions, and that’s a big part of my research.

Q: What are your thoughts on government-run behavior tracking — for example, in China, the kind that results in each citizen having a “social score”?

Raveendhran: When I studied behavior tracking, I found that people are open to being tracked, but the minute they realize that there’s a human behind that technology, it comes across as evaluative and not informational. I believe that so long as tracking makes people feel judged and threatened, it’s definitely misused, especially when the intention is to punish people or let others in the community know about some people’s low social standing.

Q: It’s hard to function in today’s society without giving up some of our data, but shouldn’t we at least care about what’s happening with that data?

Raveendhran: Yes, absolutely. How salient are our privacy concerns when we think about adopting really cool novel technologies? Are we thinking about those concerns consciously? I’m working on a project where we’re exploring whether people are more likely to give up privacy and data for the sake of convenience. For example, I could record myself while I’m sleeping and try to make sense of it all, but that takes considerable effort. Or, I can just wear a device and get a report on how well I’m sleeping if I give up my sleep data. But then a third-party company is getting my data. We are exploring some of these tradeoffs in this project.

Q: Some experts consider AI to be “the single most important and daunting challenge that humanity has ever faced,” to quote Oxford University’s Nick Bostrom. How do you view this challenge?

Raveendhran: Because I study how we interact with technology, what I’m most concerned about is people being too quick to adopt AI without thinking about why they want it, and without knowing much about how AI can be misused. Or, on the flip side, people being too resistant to it, because they think, “Oh, AI is bad” and, as a result, miss the opportunities technology creates. So, the Facebook mishap is a big example of negligence, of technology being misused. But the same Facebook has helped connect so many families and friends.

Technology itself isn’t good or bad. The way people are using technology can be scary when they aren’t making conscious choices. So how do we nudge people to apply more conscious choices to that context? That’s a question I’m really passionate about.

Roshni Raveendhran

Assistant Professor of Business Administration

Raveendhran’s research focuses on the future of work: how technological advancements influence organizational actors and business practices, the integration of novel technologies into the workplace and how organizations can increase the effectiveness of their human resource management practices to address the changing nature of work.

With expertise in leadership and decision-making, Raveendhran holds a bachelor of arts in psychology from the University of Texas at Arlington and a Ph.D. in business administration from the University of Southern California, where she received multiple teaching awards. Her dissertation on behavior-tracking technologies was recognized as a finalist in the INFORMS Best Dissertation competition.




Science as a catalyst for peace and development

When Christopher Columbus arrived in the Caribbean in the late 15th century, he and his crew had spent months sleeping on a hard and dirty deck—most likely infested with vermin. It is no surprise that the islands seemed like paradise. Not only did the sailors finally feel the land beneath their feet again, but the indigenous people slept comfortably in nets between the trees, rather than on the hard floor. It was a big difference from the sleepless months of hardship the sailors had just endured. On his trip back to Spain, Columbus took these indigenous nets with him, and before long sailors were relying on hammocks to stay comfortable on overnight voyages.

Hammocks are not the only invention that we have thanks to indigenous nations and communities. Over half of the crops now in cultivation across the globe were domesticated by indigenous peoples in the Americas, including corn, which alone provides nearly a quarter of human nutrition worldwide. In the medical field, a wide range of medications exist partially thanks to traditional medicine from around the world, including several pain relievers, drugs for dieting and antioxidant and antibacterial products.

More importantly, traditional ecological knowledge has been gaining ground in recent years as a crucial aspect of natural resource management and our understanding of climate change. Traditional ecological knowledge refers to indigenous and other forms of traditional knowledge regarding the sustainability of local resources. It is often used to sustain local populations and maintain resources necessary for survival.

Despite the benefits of indigenous knowledge, the relationship between what some call “Western” science and traditional knowledge is difficult at best. Today, Western science plays a role in every aspect of our lives, from the phones and computers we use every day to the very food on our plates. But the most important question is how we can use that science to transform our society—into a new, sustainable one rooted in healthy environments. A healthy collaboration between Western science and indigenous knowledge systems could help us to accomplish that, but to do so, the two must first gain a better understanding of each other.


This World Science Day for Peace and Development, celebrated annually on 10 November, is themed “Open science, leaving no one behind”. Open science is the movement to make scientific research and dissemination accessible to all levels of society, amateur or professional. One way that open science could lead to a sustainable future is by helping to capture the experience of indigenous peoples in future assessments of climate change and to reflect indigenous knowledge on a global scale. In doing so, it could help to do away with the old rivalry between Western science and indigenous knowledge systems.

“A global shift to open science would support countries in the environmentally sound management of chemicals and waste,” said Jacqueline Alvarez, Senior Programme Management Officer for the Chemicals and Health Branch of the UN Environment Programme (UNEP). “Research on the growing impact of emerging issues on human health and the environment is crucial for building effective development plans. Likewise, assessments of risks and monitoring of environmental trends can play a decisive role. Making that research available to a wider audience allows us to act sooner on the most urgent issues, and in doing so to bridge science to policy and policy to science.”

The divide between science and traditional knowledge is largely driven by the inability of experts on both sides to fully understand each other’s concepts. Academic texts on indigenous knowledge systems have almost exclusively been written by Western scientific researchers. This not only acts as a funnel for traditional knowledge, but it is also a one-way street. Without a good understanding of science by traditional knowledge holders, there is no way for these two knowledge systems to work together effectively and sustainably.

Open science could help to alleviate this issue by opening science up so that a more diverse group of people have access to it, including traditional knowledge holders. This could drive understanding and encourage collaboration between scientific researchers and traditional knowledge holders. Not only will this help to improve our chances of a sustainable future, but also to dismantle colonial structures that persist in the societies, politics and economies of the modern world.

Aside from the benefits for a global society and a sustainable future, indigenous communities could benefit from this relationship as well. Many indigenous communities around the world are unable to access clean drinking water, have elevated levels of toxins in the water and soil, or are surrounded by chemical production and processing facilities.

Advancing and scaling new technologies that minimize hazardous chemicals and waste, make recycling and recovering these wastes easier, and create value from products in their end-of-life stage could radically alter the chemicals and waste conversation, especially for the most affected minorities. However, the success of new and innovative technologies and other adaptive measures depends on their use and application. Open science can help to increase public understanding of these innovations, making cooperation by governments, communities, businesses and organizations more likely.

The idea of open science fits well into most indigenous knowledge systems. For example, indigenous thinkers generally don’t consider knowledge something that can be ‘owned’, especially not by a single individual. Of course, customary laws do regulate the use of traditional knowledge to make sure that people recognize and respect the sacred history and connotations that such knowledge might hold. For these reasons, indigenous peoples have expressed the need for an innovative means of protection of their knowledge that promotes and strengthens their intellectual and cultural context.

Open minds are the simple precursor to open science, and they have the power to change the world. Therefore, this World Science Day for Peace and Development should be approached with an open mind. By acknowledging that there is much to be learned from each other, global society could benefit not just from hammocks in the future, but from solutions to our most pressing sustainability issues.




  16. The world's plan to make humanity sustainable is failing. Science can

    20 June 2023. The world's plan to make humanity sustainable is failing. Science can do more to save it. There is no planet B, and the UN's Sustainable Development Goals are heading for the ...

  17. How Is Technology Transforming Humanity? Q&A with Darden Professor

    Batten Institute. Technology is reshaping the way we live, work, play and interact. Darden Professor Roshni Raveendhran, whose research sits at the intersection of psychology and technology, shares her insights about the opportunities novel technologies, such as artificial intelligence, create for humanity and the risks they pose.

  18. Can science and technology really help solve global problems? A UN

    "To truly leverage the benefits of science and technology for sustainable development, we need to prioritize solutions that are pro-poor and equitable," Mr. Liu said. "Only in this way can ...

  19. Working together to address global issues: Science and technology and

    Nowadays the human community is facing many global challenges, such as food safety, water pollution and climate change. ... policymakers and the public. Science and technology, as a global public good, can directly or indirectly contribute to sustainable development. ... She is the author of more than 40 papers and has published books such as ...

  20. Science as a catalyst for peace and development

    This World Science Day for Peace and Development, celebrated annually on 10 November, is themed "Open science, leaving no one behind". Open science is the movement to make scientific research and dissemination accessible to all levels of society, amateur or professional. One way that open science could lead to a sustainable future is by ...

  21. Full article: The contrasting roles of science and technology in

    2. The wider science and technology convergence. Science is about discovering, understanding, explaining and predicting patterns in natural phenomena, producing more accurate explanations of how the natural world works (Bertolaso, Citation 2013; Robson & McCartan, Citation 2016), regardless of potential applications.It is the result of deep curiosity and its goal is the pursuit of knowledge ...