3D printing, E-cigarettes among the most important inventions of the 21st century

The human race has always innovated, and in a relatively short time went from building fires and making stone-tipped arrows to creating smartphone apps and autonomous robots. Technological progress will undoubtedly continue to change the way we work, live, and survive in the coming decades.

Since the beginning of the new millennium, the world has witnessed the emergence of social media, smartphones, self-driving cars, and autonomous flying vehicles. There have also been huge leaps in energy storage, artificial intelligence, and medical science. Men and women have mapped the human genome and are grappling with the ramifications of biotechnology and gene editing. 

We are facing immense challenges in global warming and food security, among many other issues. While human innovation has contributed to many of the problems we face, it is also human innovation and ingenuity that can help humanity deal with these issues.

24/7 Wall St. examined media reports and other sources on the latest far-reaching innovations to find some of the most important 21st-century inventions. In some cases, though precursor research and ancillary technologies existed before 2001, the innovation did not become available to the public until this century. This list focuses on innovations (such as touchscreen glass) that support products rather than the specific products themselves (like the iPhone).

It remains to be seen whether all the technology on this list will continue to have an impact throughout the century. Legislation in the United States may limit the longevity of e-cigarettes, for example. But some of the inventions of the last 20 years will likely have staying power for the foreseeable future.

1. 3D printing

Most inventions come as a result of previous ideas and concepts, and 3D printing is no different. The earliest application of the layering method used by today's 3D printers took place in the manufacture of topographical maps in the late 19th century, and 3D printing as we know it began in the 1980s.

The convergence of cheaper manufacturing methods and open-source software, however, has led to a revolution in 3D printing in recent years. Today, the technology is used to produce everything from lower-cost car parts to bridges to less painful ballet slippers, and it is even being considered for printing artificial organs.

2. E-cigarettes

While components of the technology have existed for decades, the first modern e-cigarette was introduced in 2006. Since then, the devices have become wildly popular as an alternative to traditional cigarettes, and new trends, such as the use of flavored juice, have contributed to the success of companies like Juul.

Recent studies have shown that there remains a great deal of uncertainty and risk surrounding the devices, with an increasing number of deaths and injuries linked to vaping. In early 2020, the FDA issued a widespread ban on many flavors of cartridge-based e-cigarettes, in part because those flavors are especially popular with children and young adults.

3. Augmented reality

Augmented reality, in which digital graphics are overlaid onto live footage to convey information in real time, has been around for a while. Only recently, however, with the arrival of more powerful computing hardware and the creation of an open-source video tracking software library known as ARToolKit, has the technology really taken off.

Smartphone apps like the Pokémon Go game and Snapchat filters are just two small popular examples of modern augmented reality applications. The technology is being adopted as a tool in manufacturing, health care, travel, fashion, and education.

4. Birth control patch

The early years of the millennium brought an innovation in family planning, albeit one that is still focused only on women and does nothing to protect against sexually transmitted infections. Still, the birth control patch, first released in the United States in 2002, has made it much easier for women to prevent unintended pregnancies. The plastic patch contains the same estrogen and progesterone hormones found in birth control pills and delivers them through the skin, much as nicotine patches deliver nicotine to help people quit tobacco products.

5. Blockchain

You've likely heard about it even if you don't fully understand it. The simplest explanation of blockchain is that it is a tamper-resistant way to record transactions between parties – a shared digital ledger that participants can only add to and that is transparent to all members of a peer-to-peer network where the blockchain is logged and stored.

The technology was first deployed in 2008 to create Bitcoin, the first decentralized cryptocurrency, but it has since been adopted by the financial sector and other industries for myriad uses, including money transfers, supply chain monitoring, and food safety.
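
To make the ledger idea concrete, here is a minimal sketch in Python. It is not how Bitcoin or any production blockchain is implemented; the transactions are invented, and the point is only to show how chaining each record to the hash of the previous one makes past entries tamper-evident.

    import hashlib, json, time

    def make_block(transactions, prev_hash):
        """Bundle transactions with the hash of the block that came before."""
        block = {
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash,
        }
        # The block's own hash covers everything, including prev_hash, so
        # altering any earlier block invalidates every hash that follows it.
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        return block

    def verify(chain):
        """Check that every block still points at an unmodified predecessor."""
        for prev, curr in zip(chain, chain[1:]):
            body = {k: v for k, v in curr.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
                return False
        return True

    genesis = make_block(["ledger opened"], prev_hash="0" * 64)
    chain = [genesis, make_block(["Alice pays Bob 5"], genesis["hash"])]
    print(verify(chain))  # True; edit any recorded field and this becomes False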

6. Capsule endoscopy

Advancements in light-emitting diodes, image sensors, and optical design in the '90s led to the emergence of capsule endoscopy, first used in patients in 2001. The technology uses a tiny wireless camera the size of a vitamin pill that the patient swallows.

As the capsule traverses the digestive system, doctors can examine the gastrointestinal tract in a far less intrusive manner. Capsule endoscopy can be used to identify the source of internal bleeding, inflammation of the bowel, ulcers, and cancerous tumors.

7. Modern artificial pancreas

More formally known as a closed-loop insulin delivery system, the artificial pancreas has been around since the late '70s, but the first versions were the size of a filing cabinet. In recent years, the artificial pancreas, used primarily to treat type 1 diabetes, became portable. The first modern, portable artificial pancreas was approved for use in the United States in 2016.

The system continuously monitors blood glucose levels, calculates the amount of insulin required, and automatically delivers it through a small pump. British studies have shown that patients using these devices spent more time in their ideal glucose-level range. In December 2019, the FDA approved an even more advanced version of the artificial pancreas, called Control-IQ, developed at the University of Virginia.
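
To picture the monitor-calculate-deliver cycle, here is a deliberately simplified sketch. It is not the Control-IQ algorithm or any approved dosing logic; every threshold and parameter is an invented placeholder.

    TARGET_MG_DL = 120        # illustrative glucose target
    CORRECTION_FACTOR = 50    # assumed: 1 unit of insulin lowers glucose ~50 mg/dL
    MAX_BOLUS_UNITS = 2.0     # assumed safety cap per cycle

    def control_step(glucose_mg_dl):
        """One monitor-calculate-deliver cycle of a toy closed-loop controller."""
        if glucose_mg_dl <= TARGET_MG_DL:
            return 0.0  # at or below target: deliver nothing this cycle
        excess = glucose_mg_dl - TARGET_MG_DL
        return round(min(excess / CORRECTION_FACTOR, MAX_BOLUS_UNITS), 2)

    for reading in (110, 160, 240):
        print(reading, "mg/dL ->", control_step(reading), "units")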

8. E-readers

Sony was the first company to release an e-reader using a so-called microencapsulated electrophoretic display, commonly referred to as e-ink. E-ink technology, which mimics ink on paper, is easy on the eyes and consumes little power; it had been around since the '70s (and was improved in the '90s), but the e-reader had to wait until broader demand for e-books emerged. Sony was quickly overtaken by Amazon's Kindle after the Kindle's 2007 debut. The popularity of e-readers has declined with the emergence of tablets and smartphones, but they still command loyalty from bookworms worldwide.

9. Gene editing

Researchers from the University of California, Berkeley and a separate team from Harvard and the Broad Institute independently discovered in 2012 that a bacterial immune system known as CRISPR (an acronym for clustered regularly interspaced short palindromic repeats) could be used as a powerful gene-editing tool to make detailed changes to any organism's DNA. This discovery heralded a new era in biotechnology.

The technology has the potential to eradicate diseases, for example by altering the genes of mice and mosquitoes to combat the spread of Lyme disease and malaria, but it also raises ethical questions, especially with regard to human gene editing for reproductive purposes.
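
At the heart of the technique is a targeting step: in the widely used S. pyogenes Cas9 system, a roughly 20-letter guide sequence directs the cut to sites that sit immediately next to an "NGG" motif (the PAM). The toy sketch below searches an invented DNA string for such a site; real guide design involves far more, such as off-target scoring and strand handling.

    import re

    def find_cut_sites(dna: str, guide: str):
        """Return indices where the guide is followed by an NGG PAM (toy model)."""
        pattern = re.escape(guide.upper()) + r"(?=[ACGT]GG)"
        return [m.start() for m in re.finditer(pattern, dna.upper())]

    dna = "TTACGATCGGAAGTCCGATTGCAGTGGATCCA"   # invented sequence
    guide = "GATCGGAAGTCCGATTGCAG"              # invented 20-letter guide
    print(find_cut_sites(dna, guide))           # [4]: match at index 4, before "TGG"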

10. High-density battery packs

Tesla electric cars have received so much attention largely because of their batteries. The batteries, located underneath the passenger cabin, consist of thousands of high-density lithium ion cells, each barely larger than a standard AA battery, nestled into a large, heavy battery pack that also offers Tesla electric cars a road-gripping low center of gravity and structural support.

The brainchild of Tesla co-founder J.B. Straubel, these battery modules pack more of a punch than standard (and cheaper) electric car batteries. These packs are also being used in residential, commercial, and grid-scale energy storage devices.
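
For a feel for the arithmetic, here is a rough, illustrative calculation of pack energy from cell count and per-cell capacity. The figures are approximations chosen for illustration, not official specifications for any particular model.

    # Illustrative, approximate figures -- not official specifications.
    cells_in_pack = 7_104       # roughly the cell count reported for early 85 kWh packs
    cell_capacity_ah = 3.4      # typical high-density 18650-class cell
    nominal_voltage_v = 3.6     # nominal lithium-ion cell voltage

    pack_energy_kwh = cells_in_pack * cell_capacity_ah * nominal_voltage_v / 1000
    print(f"{pack_energy_kwh:.0f} kWh")   # ~87 kWh, in the ballpark of such packs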

11. Digital assistants

One of the biggest technology trends in recent years has been smart home technology, which can now be found in everyday consumer devices like door locks, light bulbs, and kitchen appliances. The key piece of technology that has helped make all this possible is the digital assistant. Apple was the first major tech company to introduce a virtual assistant, Siri, which debuted on iOS in 2011.

Other digital assistants, such as Microsoft's Cortana and Amazon's Alexa, have since entered the market. The assistants gained another level of popularity when tech companies introduced smart speakers. Notably, Google Home and Amazon's Echo can now be found in millions of homes, with an ever-growing range of applications.

12. Robot heart

Artificial hearts have been around for some time. They are mechanical devices connected to the actual heart or implanted in the chest to assist or replace a failing heart. Abiomed, a Danvers, Massachusetts-based company, developed a robot heart called AbioCor, a self-contained apparatus made of plastic and titanium.

AbioCor is a self-contained unit with the exception of a wireless battery pack that is attached to the wrist. Robert Tools, a technical librarian with congestive heart failure, received the first one on July 2, 2001.

13. Retinal implant

When he was a medical student, Dr. Mark Humayun watched his grandmother gradually lose her vision. The ophthalmologist and bioengineer focused on finding a solution to the causes of blindness. He collaborated with Dr. James Weiland, a colleague at the USC Gayle and Edward Roski Eye Institute, and other experts to create the Argus II.

The Argus II is a retinal prosthesis device that is considered to be a breakthrough for those suffering from retinitis pigmentosa, an inherited retinal degenerative condition that can lead to blindness. The condition afflicts 1.5 million people worldwide. The device was approved by the U.S. Food and Drug Administration in 2013.

14. Mobile operating systems

Mobile operating systems have enabled the proliferation of smartphones and other portable gadgets thanks to their intuitive user interfaces and seemingly endless app options, and they have become the most consumer-facing of computer operating systems. When Google purchased Android Inc. in 2005, the operating system was just two years old, and the first iPhone (with its iOS) was still two years from its commercial debut.

15. Multi-use rockets

Billionaire entrepreneur Elon Musk may ultimately be remembered less for his contributions to electric cars than for his contributions to space exploration. Musk's private space exploration company, SpaceX, has developed rockets that can be recovered and reused in later launches – a more efficient and cheaper alternative to using rockets only once and letting them fall into the ocean.

On March 30, 2017, SpaceX became the first to relaunch one of these used rockets, a Falcon 9. Blue Origin, a space-transport company founded by Amazon.com's Jeff Bezos, has launched its own reusable rocket.

16. Online streaming

Online streaming would not be possible without the convergence of widespread broadband internet access and cloud computing data centers used to store content and direct web traffic. While internet-based live streaming has been around almost since the internet was broadly adopted in the '90s, it was not until the mid-2000s that the internet could handle the delivery of streaming media to large audiences. Online streaming is posing an existential threat to existing models of delivering media entertainment, such as cable television and movie theaters.

17. Robotic exoskeletons

Ever since researchers at the University of California, Berkeley created a robotic device in 2003 that attaches to the lower back to augment human strength, demand for robotic exoskeletons for physical rehabilitation has increased, and manufacturing has taken off.

Wearable exoskeletons are increasingly helping people with mobility issues (particularly lower body paralysis), and are being used in factories. Ford Motor Company, for example, has used an exoskeleton vest that helps auto assemblers with repetitive tasks in order to lessen the wear and tear on shoulders and arms.

18. Small satellites

As modern electronic devices have gotten smaller, so, too, have orbital satellites, which companies, governments, and organizations use to gather scientific data, collect images of Earth, and support telecommunications and intelligence work. These tiny, low-cost orbital devices fall into different categories by weight, but one of the most common is the shoebox-sized CubeSat. As of October 2019, more than 2,400 satellites weighing between 1 kg (2.2 lbs) and 40 kg (88 lbs) had been launched, according to the Nanosats Database.

19. Solid-state lidar

Lidar is an acronym that stands for light detection and ranging, and is also a portmanteau of the words "light" and "radar." The technology today is most often associated with self-driving cars. Like radar, which bounces radio waves off objects to determine their distance, lidar uses laser pulses to do the same.

By sweeping many laser pulses in rotation, a lidar unit can create a constantly updated, high-resolution map of the surrounding environment. The next steps in the technology include smaller and cheaper sensors, and especially solid-state ones with no spinning assembly on top of the car.
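
The underlying distance measurement is simple: time the pulse's round trip and multiply by the speed of light, halved because the pulse travels out and back. A minimal sketch, ignoring noise, beam divergence, and any real sensor's interface:

    import math

    SPEED_OF_LIGHT_M_S = 299_792_458

    def distance_from_echo(round_trip_seconds):
        """Convert a pulse's round-trip time into a one-way distance in meters."""
        return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

    def to_point(round_trip_seconds, azimuth_deg, elevation_deg=0.0):
        """Turn one timed echo at a known beam angle into an (x, y, z) point."""
        r = distance_from_echo(round_trip_seconds)
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        return (r * math.cos(el) * math.cos(az),
                r * math.cos(el) * math.sin(az),
                r * math.sin(el))

    # An echo returning after about 200 nanoseconds puts the object ~30 m away.
    print(distance_from_echo(200e-9))              # ~29.98
    print(to_point(200e-9, azimuth_deg=45.0))      # one point of the surrounding map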

20. Tokenization

If you have ever used the chip embedded in a credit or debit card to make a payment by tapping rather than swiping, then you have benefited from the heightened security of tokenization. This data security technology replaces sensitive data with an equivalent randomized number, known as a token, that is used only once per transaction and has no value to would-be hackers and identity thieves attempting to intercept transaction data as it travels from sender to recipient. Social media site classmates.com was reportedly the first to use tokenization in 2001 to protect its subscribers' sensitive data. Tokenization is also being touted as a way to prevent hackers from interfering with driverless cars.
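
A minimal sketch of the concept, with a simple in-memory dictionary standing in for the secure token vault that a real payment network would operate:

    import secrets

    class TokenVault:
        """Toy token service: swaps sensitive values for random, single-use tokens."""

        def __init__(self):
            self._vault = {}  # token -> original value, kept only on the secure side

        def tokenize(self, card_number: str) -> str:
            token = secrets.token_hex(8)   # random; reveals nothing about the card
            self._vault[token] = card_number
            return token

        def detokenize(self, token: str) -> str:
            # Single use: the mapping is destroyed once redeemed, so an
            # intercepted token is worthless for a second transaction.
            return self._vault.pop(token)

    vault = TokenVault()
    t = vault.tokenize("4111 1111 1111 1111")
    print(t)                    # what travels with the transaction
    print(vault.detokenize(t))  # only the vault can map it back, exactly once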

21. Touchscreen glass

Super-thin, chemically strengthened glass is a key component of the touchscreen world. This sturdy, transparent material is what helps keep your iPad or Samsung smartphone from shattering into pieces at the slightest drop. Even if these screens crack, in most cases the damage is cosmetic and the gadget still works.

Corning Inc., already a leader in the production of treated glass used in automobiles, was asked by Apple to develop 1.3-mm treated glass for its iPhone, which debuted in 2007. Corning's Gorilla Glass is still the most well known, though other brands exist in the marketplace.

24/7 Wall Street is a USA TODAY content partner offering financial news and commentary. Its content is produced independently of USA TODAY.

Technology over the long run: zoom out to see how dramatically the world can change within a lifetime

It is easy to underestimate how much the world can change within a lifetime. Considering how dramatically the world has changed can help us see how different the world could be in a few years or decades.

Technology can change the world in ways that are unimaginable until they happen. Switching on an electric light would have been unimaginable for our medieval ancestors. In their childhood, our grandparents would have struggled to imagine a world connected by smartphones and the Internet.

Similarly, it is hard for us to imagine the arrival of all those technologies that will fundamentally change the world we are used to.

We can remind ourselves that our own future might look very different from the world today by looking back at how rapidly technology has changed our world in the past. That’s what this article is about.

One insight I take away from this long-term perspective is how unusual our time is. Technological change was extremely slow in the past – the technologies that our ancestors got used to in their childhood were still central to their lives in their old age. In stark contrast to those days, we live in a time of extraordinarily fast technological change. For recent generations, it was common for technologies that were unimaginable in their youth to become common later in life.

The long-run perspective on technological change

The big visualization offers a long-term perspective on the history of technology. 1

The timeline begins at the center of the spiral. The first use of stone tools, 3.4 million years ago, marks the beginning of this history of technology. 2 Each turn of the spiral represents 200,000 years of history. It took 2.4 million years – 12 turns of the spiral – for our ancestors to control fire and use it for cooking. 3

To be able to visualize the inventions in the more recent past – the last 12,000 years – I had to unroll the spiral. I needed more space to be able to show when agriculture, writing, and the wheel were invented. During this period, technological change was faster, but it was still relatively slow: several thousand years passed between each of these three inventions.

From 1800 onwards, I stretched out the timeline even further to show the many major inventions that rapidly followed one after the other.

The long-term perspective that this chart provides makes it clear just how unusually fast technological change is in our time.

You can use this visualization to see how technology developed in particular domains. Follow, for example, the history of communication: from writing to paper, to the printing press, to the telegraph, the telephone, the radio, all the way to the Internet and smartphones.

Or follow the rapid development of human flight. In 1903, the Wright brothers took the first flight in human history (they were in the air for less than a minute), and just 66 years later, we landed on the moon. Many people saw both within their lifetimes: the first plane and the moon landing.

This large visualization also highlights the wide range of technology’s impact on our lives. It includes extraordinarily beneficial innovations, such as the vaccine that allowed humanity to eradicate smallpox , and it includes terrible innovations, like the nuclear bombs that endanger the lives of all of us .

What will the next decades bring?

The red timeline reaches up to the present and then continues in green into the future. Many children born today, even without further increases in life expectancy, will live well into the 22nd century.

New vaccines, progress in clean, low-carbon energy, better cancer treatments – a range of future innovations could very much improve our living conditions and the environment around us. But, as I argue in a series of articles , there is one technology that could even more profoundly change our world: artificial intelligence (AI).

One reason why artificial intelligence is such an important innovation is that intelligence is the main driver of innovation itself. This fast-paced technological change could speed up even more if it’s driven not only by humanity’s intelligence but also by artificial intelligence. If this happens, the change currently stretched out over decades might happen within a very brief time span of just a year. Possibly even faster. 4

I think AI technology could have a fundamentally transformative impact on our world. In many ways, it is already changing our world, as I documented in this companion article . As this technology becomes more capable in the years and decades to come, it can give immense power to those who control it (and it poses the risk that it could escape our control entirely).

Such systems might seem hard to imagine today, but AI technology is advancing quickly. Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, as I documented in this article .

Technology will continue to change the world – we should all make sure that it changes it for the better

What is familiar to us today – photography, the radio, antibiotics, the Internet, or the International Space Station circling our planet – was unimaginable to our ancestors just a few generations ago. If your great-great-great grandparents could spend a week with you, they would be blown away by your everyday life.

What I take away from this history is that I will likely see technologies in my lifetime that appear unimaginable to me today.

In addition to this trend towards increasingly rapid innovation, there is a second long-run trend. Technology has become increasingly powerful. While our ancestors wielded stone tools, we are building globe-spanning AI systems and technologies that can edit our genes.

Because of the immense power that technology gives those who control it, there is little that is as important as the question of which technologies get developed during our lifetimes. Therefore, I think it is a mistake to leave the question about the future of technology to the technologists. Which technologies are controlled by whom is one of the most important political questions of our time because of the enormous power these technologies convey to those who control them.

We all should strive to gain the knowledge we need to contribute to an intelligent debate about the world we want to live in. To a large part, this means gaining knowledge and wisdom on the question of which technologies we want.

Acknowledgments: I would like to thank my colleagues Hannah Ritchie, Bastian Herre, Natasha Ahuja, Edouard Mathieu, Daniel Bachler, Charlie Giattino, and Pablo Rosado for their helpful comments on drafts of this essay and the visualization. Thanks also to Lizka Vaintrob and Ben Clifford for the conversation that initiated this visualization.

Appendix: About the choice of visualization in this article

The recent speed of technological change makes it difficult to picture the history of technology in one visualization. When you visualize this development on a linear timeline, then most of the timeline is almost empty, while all the action is crammed into the right corner:

Linear version of the spiral chart

In my large visualization here, I tried to avoid this problem and instead show the long history of technology in a way that lets you see when each technological breakthrough happened and how, within the last millennia, there was a continuous acceleration of technological change.

The recent speed of technological change makes it difficult to picture the history of technology in one visualization. In the appendix, I show how this would look if it were linear.

It is, of course, difficult to assess when exactly the first stone tools were used.

The research by McPherron et al. (2010) suggested that it was at least 3.39 million years ago. This is based on two fossilized bones found in Dikika in Ethiopia, which showed “stone-tool cut marks for flesh removal and percussion marks for marrow access”. These marks were interpreted as being caused by meat consumption and provide the first evidence that one of our ancestors, Australopithecus afarensis, used stone tools.

The research by Harmand et al. (2015) provided evidence for stone tool use in today’s Kenya 3.3 million years ago.

References:

McPherron et al. (2010) – Evidence for stone-tool-assisted consumption of animal tissues before 3.39 million years ago at Dikika, Ethiopia. Published in Nature.

Harmand et al. (2015) – 3.3-million-year-old stone tools from Lomekwi 3, West Turkana, Kenya. Published in Nature.

Evidence for controlled fire use approximately 1 million years ago is provided by Berna et al. (2012) – Microstratigraphic evidence of in situ fire in the Acheulean strata of Wonderwerk Cave, Northern Cape province, South Africa. Published in PNAS.

The authors write: “The ability to control fire was a crucial turning point in human evolution, but the question of when hominins first developed this ability still remains. Here we show that micromorphological and Fourier transform infrared microspectroscopy (mFTIR) analyses of intact sediments at the site of Wonderwerk Cave, Northern Cape province, South Africa, provide unambiguous evidence—in the form of burned bone and ashed plant remains—that burning took place in the cave during the early Acheulean occupation, approximately 1.0 Ma. To the best of our knowledge, this is the earliest secure evidence for burning in an archaeological context.”

This is what authors like Holden Karnofsky called ‘Process for Automating Scientific and Technological Advancement’ or PASTA. Some recent developments go in this direction: DeepMind’s AlphaFold helped to make progress on one of the large problems in biology, and they have also developed an AI system that finds new algorithms that are relevant to building a more powerful AI.

How artificial intelligence is transforming the world

By Darrell M. West and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
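
As a deliberately tiny illustration of "looking for underlying trends," the sketch below fits a model to a handful of invented data points with the scikit-learn library and then applies the learned trend to a new case:

    from sklearn.linear_model import LinearRegression

    # Invented example: hours of highway congestion (y) versus inches of rain (X).
    X = [[0.0], [0.5], [1.0], [2.0], [3.0]]
    y = [1.0, 1.4, 2.1, 3.9, 5.2]

    model = LinearRegression().fit(X, y)   # learn the underlying trend
    print(model.coef_[0], model.intercept_)
    print(model.predict([[1.5]]))          # apply the trend to a new, unseen case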

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.

Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
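
One common way to implement that kind of outlier hunting is unsupervised anomaly detection: train on routine transactions and flag the ones that look least like them. A minimal sketch using scikit-learn's IsolationForest on invented transaction records:

    from sklearn.ensemble import IsolationForest

    # Invented transaction records: [amount_in_dollars, hour_of_day]
    transactions = [
        [25, 9], [40, 12], [18, 14], [32, 10], [27, 16],
        [38, 11], [22, 13], [30, 15], [9500, 3], [45, 12],
    ]

    detector = IsolationForest(contamination=0.1, random_state=0)
    labels = detector.fit_predict(transactions)   # -1 marks likely outliers

    flagged = [t for t, label in zip(transactions, labels) if label == -1]
    print(flagged)   # the large 3 a.m. transfer stands out for human review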

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero day or zero second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.
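
The $250,000 figure is simply those rates multiplied out, as a quick back-of-the-envelope check shows:

    images = 10_000
    images_per_hour = 4      # careful human reading rate cited above
    rate_per_hour = 100      # radiologist cost, dollars per hour

    hours = images / images_per_hour     # 2,500 hours of reading time
    cost = hours * rate_per_hour         # $250,000
    print(f"{hours:,.0f} hours, ${cost:,.0f}")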

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. They are mounted on top of vehicles and use imaging in a 360-degree environment, bouncing light beams off surrounding objects to measure their speed and distance. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared with 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. These include improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improve data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.
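
To make the mechanics concrete, the short Python sketch below shows how a researcher might pull recent posts from a social media API and tally how often topics are mentioned. It is only an illustration: the endpoint URL, token handling, and JSON field names are placeholders rather than Twitter's actual API specification.

    # Minimal sketch of querying a social media API for research purposes.
    # "API_URL", the bearer-token scheme, and the JSON shape are assumptions,
    # not the documented interface of any particular platform.
    import requests
    from collections import Counter

    API_URL = "https://api.example.com/v2/search/recent"   # hypothetical endpoint
    TOKEN = "YOUR-ACCESS-TOKEN"                             # issued by the platform

    def fetch_posts(query, max_results=100):
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"query": query, "max_results": max_results},
        )
        resp.raise_for_status()
        return resp.json().get("data", [])                  # list of post objects

    posts = fetch_posts("artificial intelligence")
    words = Counter(w.lower() for p in posts for w in p.get("text", "").split())
    print(words.most_common(10))    # crude view of how people are reacting to a topic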

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol under which certified researchers can query its de-identified health information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
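
The sketch below illustrates the general idea of de-identifying records before they are shared; it is not the National Cancer Institute's actual protocol. The field names, salting scheme, and date coarsening are assumptions chosen for illustration.

    # Simplified de-identification sketch: drop direct identifiers, replace the
    # patient ID with a salted hash, and coarsen dates to the year. Field names
    # and the salting approach are invented for illustration.
    import hashlib

    SALT = "research-project-salt"            # kept secret by the data custodian
    DIRECT_IDENTIFIERS = {"name", "address", "phone"}

    def de_identify(record):
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        clean["patient_id"] = hashlib.sha256(
            (SALT + str(record["patient_id"])).encode()
        ).hexdigest()[:16]
        clean["diagnosis_year"] = record["diagnosis_date"][:4]   # keep year only
        del clean["diagnosis_date"]
        return clean

    raw = {"patient_id": 1001, "name": "Jane Doe", "address": "1 Main St",
           "phone": "555-0100", "diagnosis_date": "2016-03-14",
           "diagnosis": "melanoma", "therapy": "immunotherapy"}
    print(de_identify(raw))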

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own data on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.
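
As a hypothetical illustration of such a partnership, the sketch below joins company-provided ride-share pickup counts with a city's bus-delay records to flag corridors where congestion and transit delays coincide. The column names and thresholds are invented.

    # Hypothetical public-private data join for transportation planning.
    import pandas as pd

    rides = pd.DataFrame({
        "corridor": ["Main St", "Main St", "5th Ave"],
        "hour": [8, 9, 8],
        "pickups": [420, 510, 180],
    })
    transit = pd.DataFrame({
        "corridor": ["Main St", "Main St", "5th Ave"],
        "hour": [8, 9, 8],
        "avg_bus_delay_min": [11.5, 14.0, 3.2],
    })

    merged = rides.merge(transit, on=["corridor", "hour"])
    hotspots = merged[(merged["pickups"] > 400) & (merged["avg_bus_delay_min"] > 10)]
    print(hotspots)   # corridors and hours where planners might add capacity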

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system generates more people with these capabilities, the shortage will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also need to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, which uses Watson’s free online tools to help teachers bring the latest knowledge into the classroom. These tools enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of the new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

Among the specific questions the committee is asked to address are the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services made specific algorithmic choices that affect them.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
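
The cited study fits statistical models of respondents' preferences before aggregating them; the deliberately simplified Python sketch below conveys only the core idea, using a plain plurality vote over labeled scenarios. The scenario names and votes are invented for illustration.

    # Simplified "voting-based" ethics sketch: tally respondents' choices per
    # dilemma scenario and apply the plurality option at decision time. The real
    # study models preferences rather than taking raw pluralities.
    from collections import Counter, defaultdict

    survey = [
        ("swerve_vs_stay", "protect_pedestrians"),
        ("swerve_vs_stay", "protect_pedestrians"),
        ("swerve_vs_stay", "protect_passengers"),
        ("brake_vs_swerve", "brake_hard"),
        ("brake_vs_swerve", "brake_hard"),
    ]

    tallies = defaultdict(Counter)
    for scenario, choice in survey:
        tallies[scenario][choice] += 1

    policy = {s: c.most_common(1)[0][0] for s, c in tallies.items()}

    def decide(scenario):
        # Fall back to a conservative default for scenarios never voted on.
        return policy.get(scenario, "brake_hard")

    print(decide("swerve_vs_stay"))    # -> protect_pedestrians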

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious uses designed to trick software or exploit it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
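
The sketch below is a hypothetical illustration of how such a policy engine might combine explicit voice-firewall rules with a machine-learning risk score; it is not the system described by the Department of Homeland Security, and the rules, threshold, and scoring stub are invented.

    # Hypothetical voice-firewall policy engine: each incoming call is checked
    # against explicit policies (harassing numbers, robocalls) and a stubbed
    # machine-learning fraud score; any hit blocks the call.
    BLOCKLIST = {"+15550100", "+15550123"}       # numbers flagged for harassment

    def ml_fraud_score(call):
        # Stand-in for a trained model; a real system would score audio/metadata.
        score = 0.0
        if call["calls_last_hour"] > 20:
            score += 0.5
        if call["spoofed_caller_id"]:
            score += 0.4
        return score

    def should_block(call):
        if call["number"] in BLOCKLIST:
            return True, "harassing caller policy"
        if call["is_robocall"]:
            return True, "robocall policy"
        if ml_fraud_score(call) >= 0.7:
            return True, "potential fraud (model score)"
        return False, "allowed"

    call = {"number": "+15550199", "is_robocall": False,
            "calls_last_hour": 35, "spoofed_caller_id": True}
    print(should_block(call))    # -> (True, 'potential fraud (model score)')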

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have a substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scolar, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper, IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.


THE IMPACT OF TECHNOLOGY ON HIGHER EDUCATION IN THE 21st CENTURY: A SYSTEMATIC LITERATURE REVIEW

Dr. Ahmad Shekib Popal

2024, GAP iNTERDISCIPLINARITIES: A Global Journal of Interdisciplinary Studies (ISSN: 2581-5628), Impact Factor: SJIF 5.363, IIFS 4.875

In the ever-evolving landscape of 21st-century higher education, this article delves into the transformative role technology plays in reshaping how we acquire, disseminate, and apply knowledge. From the traditional chalkboards to interactive screens, the evolution has been revolutionary, woven into the fabric of our daily lives. The exploration draws on scholarly sources, navigating through digital tools, platforms, and strategies, from classrooms to online environments, and from augmented reality to artificial intelligence. The literature review assesses the remarkable transformation catalyzed by digital technologies, examining themes such as digital natives, blended learning, immersive technologies, adaptive learning, and data analytics. It uncovers both opportunities and challenges, addressing issues of equity and ethical considerations. The research questions focus on technology's impact on student engagement, learning outcomes, and equitable access. Objectives include elevating student digital literacy and enhancing teacher proficiency in online pedagogy. The methodology combines a comprehensive literature review with practical interventions and data analysis. The article concludes by emphasizing the dynamic nature of technology in education, acknowledging challenges, and calling for ongoing research and critical evaluation to shape the future of learning.



Essay on Technology

The word "technology" and its uses have changed immensely since the 20th century and continue to evolve. We are living in a world driven by technology. The advancement of technology has played an important role in the development of human civilization, along with cultural changes. Technology provides innovative ways of doing work through various smart means.

Electronic appliances, gadgets, faster modes of communication, and transport have added to the comfort factor in our lives. It has helped in improving the productivity of individuals and different business enterprises. Technology has brought a revolution in many operational fields. It has undoubtedly made a very important contribution to the progress that mankind has made over the years.

The Advancement of Technology:

Technology has reduced the effort and time required to get work done and has increased the efficiency of production in every field. It has made our lives easy, comfortable, healthy, and enjoyable. It has brought a revolution in transport and communication. The advancement of technology, along with science, has helped us to become self-reliant in all spheres of life. With the innovation of a particular technology, it becomes part of society and integral to human lives after a point in time.

Technology is Our Part of Life:

Technology has changed our day-to-day lives and brought the world closer and better connected. The days when only the rich could afford such luxuries have passed. Because of the rise of globalisation and liberalisation, these luxuries are now within the reach of the average person. Today, an average middle-class family can afford a mobile phone, a television, a washing machine, a refrigerator, a computer, the Internet, etc. At the touch of a switch, a man can witness any event that is happening in far-off places.

Benefits of Technology in All Fields: 

We cannot escape technology; it has improved the quality of life and brought about revolutions in various fields of modern-day society, be it communication, transportation, education, healthcare, and many more. Let us learn about it.

Technology in Communication:

With the advent of technology in communication, which includes telephones, fax machines, cellular phones, the Internet, multimedia, and email, communication has become much faster and easier. It has transformed and influenced relationships in many ways. We no longer need to rely on sending physical letters and waiting for several days for a response. Technology has made communication so simple that you can connect with anyone from anywhere by calling them via mobile phone or messaging them using different messaging apps that are easy to download.

Innovation in communication technology has had an immense influence on social life. Human socialising has become easier by using social networking sites, dating, and even matrimonial services available on mobile applications and websites.

Today, the Internet is used for shopping, paying utility bills, credit card bills, admission fees, e-commerce, and online banking. In the world of marketing, many companies are marketing and selling their products and creating brands over the internet. 

In the field of travel, cities, towns, states, and countries are using the web to post detailed tourist and event information. Travellers across the globe can easily find information on tourism, sightseeing, places to stay, weather, maps, timings for events, transportation schedules, and buy tickets to various tourist spots and destinations.

Technology in the Office or Workplace:

Technology has increased efficiency and flexibility in the workspace. Technology has made it easy to work remotely, which has increased the productivity of employees. External and internal communication has become faster through emails and apps. Automation has saved time and reduced redundancy in tasks. Robots are now used to manufacture products, consistently delivering the same product without defects until the robot itself fails. Artificial Intelligence and Machine Learning are innovations being deployed across industries to reap benefits.

Technology has wiped out the manual way of storing files. Now files are stored in the cloud, which can be accessed at any time and from anywhere. With technology, companies can make quick decisions, act faster towards solutions, and remain adaptable. Technology has optimised the usage of resources and connected businesses worldwide. For example, if the customer is based in America, he can have the services delivered from India. They can communicate with each other in an instant. Every company uses business technology like virtual meeting tools, corporate social networks, tablets, and smart customer relationship management applications that accelerate the fast movement of data and information.

Technology in Education:

Technology is making the education industry improve over time. With technology, students and parents have a variety of learning tools at their fingertips. Teachers can coordinate with classrooms across the world and share their ideas and resources online. Students can get immediate access to an abundance of good information on the Internet. Teachers and students can access plenty of resources available on the web and utilise them for their project work, research, etc. Online learning has changed our perception of education. 

The COVID-19 pandemic brought a paradigm shift using technology where school-going kids continued their studies from home and schools facilitated imparting education by their teachers online from home. Students have learned and used 21st-century skills and tools, like virtual classrooms, AR (Augmented Reality), robots, etc. All these have increased communication and collaboration significantly. 

Technology in Banking:

Technology and banking are now inseparable. Technology has boosted digital transformation in how the banking industry works and has vastly improved banking services for their customers across the globe.

Technology has made banking operations very sophisticated and has reduced to almost nil the errors that were common in manual processes. Banks are adopting Artificial Intelligence (AI) to increase their efficiency and profits. With the emergence of Internet banking, self-service tools have replaced the traditional methods of banking.

You can now access your money, handle transactions like paying bills, money transfers, and online purchases from merchants, and monitor your bank statements anytime and from anywhere in the world. Technology has made banking more secure and safe. You do not need to carry cash in your pocket or wallet; the payments can be made digitally using e-wallets. Mobile banking, banking apps, and cybersecurity are changing the face of the banking industry.

Manufacturing and Production Industry Automation:

At present, manufacturing industries are using all the latest technologies, ranging from big data analytics to artificial intelligence. Big data, ARVR (Augmented Reality and Virtual Reality), and IoT (Internet of Things) are the biggest manufacturing industry players. Automation has increased the level of productivity in various fields. It has reduced labour costs, increased efficiency, and reduced the cost of production.

For example, 3D printing is used to design and develop prototypes in the automobile industry. Repetitive work is being done easily with the help of robots without any waste of time. This has also reduced the cost of the products. 

Technology in the Healthcare Industry:

Technological advancements in the healthcare industry have not only improved our personal quality of life and longevity; they have also improved the lives of many medical professionals and students who are training to become medical experts. It has allowed much faster access to the medical records of each patient. 

The Internet has drastically transformed patients' and doctors’ relationships. Everyone can stay up to date on the latest medical discoveries, share treatment information, and offer one another support when dealing with medical issues. Modern technology has allowed us to contact doctors from the comfort of our homes. There are many sites and apps through which we can contact doctors and get medical help. 

Breakthrough innovations in surgery, artificial organs, brain implants, and networked sensors are examples of transformative developments in the healthcare industry. Hospitals use different tools and applications to perform their administrative tasks, using digital marketing to promote their services.

Technology in Agriculture:

Today, farmers work very differently than they would have decades ago. Data analytics and robotics have built a productive food system. Digital innovations are being used for plant breeding and harvesting equipment. Software and mobile devices are helping farmers harvest better. With various data and information available to farmers, they can make better-informed decisions, for example, tracking the amount of carbon stored in soil and helping with climate change.

Disadvantages of Technology:

People have become dependent on various gadgets and machines, resulting in a lack of physical activity and tempting people to lead an increasingly sedentary lifestyle. Even though technology has increased the productivity of individuals, organisations, and the nation, it has not increased the efficiency of machines. Machines cannot plan and think beyond the instructions that are fed into their system. Technology alone is not enough for progress and prosperity. Management is required, and management is a human act. Technology is largely dependent on human intervention. 

Computers and smartphones have led to an increase in social isolation. Young children are spending more time surfing the internet, playing games, and ignoring their real lives. The use of technology is also resulting in job losses and distracting students from learning. Technology has also enabled the production of destructive weapons.

Dependency on technology is also increasing privacy concerns and cyber crimes, giving way to hackers.

FAQs on Technology Essay

1. What is technology?

Technology refers to innovative ways of doing work through various smart means. The advancement of technology has played an important role in the development of human civilization. It has helped in improving the productivity of individuals and businesses.

2. How has technology changed the face of banking?

Technology has made banking operations very sophisticated. With the emergence of Internet banking, self-service tools have replaced the traditional methods of banking. You can now access your money, handle transactions, and monitor your bank statements anytime and from anywhere in the world. Technology has made banking more secure and safe.

3. How has technology brought a revolution in the medical field?

Patients and doctors keep each other up to date on the most recent medical discoveries, share treatment information, and offer each other support when dealing with medical issues. It has allowed much faster access to the medical records of each patient. Modern technology has allowed us to contact doctors from the comfort of our homes. There are many websites and mobile apps through which we can contact doctors and get medical help.

4. Are we dependent on technology?

Yes, today, we are becoming increasingly dependent on technology. Computers, smartphones, and modern technology have helped humanity achieve success and progress. However, people also need to maintain a healthy lifestyle and address the personal problems that arise from technological advancements in different aspects of human life.


The 20th and 21st centuries


Technology from 1900 to 1945

Recent history is notoriously difficult to write, because of the mass of material and the problem of distinguishing the significant from the insignificant among events that have virtually the power of contemporary experience. In respect to the recent history of technology, however, one fact stands out clearly: despite the immense achievements of technology by 1900, the following decades witnessed more advance over a wide range of activities than the whole of previously recorded history. The airplane, the rocket and interplanetary probes, electronics, atomic power, antibiotics, insecticides, and a host of new materials have all been invented and developed to create an unparalleled social situation, full of possibilities and dangers, which would have been virtually unimaginable before the present century.

In venturing to interpret the events of the 20th century, it will be convenient to separate the years before 1945 from those that followed. The years 1900 to 1945 were dominated by the two World Wars, while those since 1945 were preoccupied by the need to avoid another major war. The dividing point is one of outstanding social and technological significance: the detonation of the first atomic bomb at Alamogordo, New Mexico, in July 1945.

There were profound political changes in the 20th century related to technological capacity and leadership. It may be an exaggeration to regard the 20th century as “the American century,” but the rise of the United States as a superstate was sufficiently rapid and dramatic to excuse the hyperbole. It was a rise based upon tremendous natural resources exploited to secure increased productivity through widespread industrialization, and the success of the United States in achieving this objective was tested and demonstrated in the two World Wars. Technological leadership passed from Britain and the European nations to the United States in the course of these wars. This is not to say that the springs of innovation went dry in Europe. Many important inventions of the 20th century originated there. But it was the United States that had the capacity to assimilate innovations and take full advantage of them at times when other countries were deficient in one or other of the vital social resources without which a brilliant invention cannot be converted into a commercial success. As with Britain in the Industrial Revolution, the technological vitality of the United States in the 20th century was demonstrated less by any particular innovations than by its ability to adopt new ideas from whatever source they came.

The two World Wars were themselves the most important instruments of technological as well as political change in the 20th century. The rapid evolution of the airplane is a striking illustration of this process, while the appearance of the tank in the first conflict and of the atomic bomb in the second show the same signs of response to an urgent military stimulus. It has been said that World War I was a chemists’ war, on the basis of the immense importance of high explosives and poison gas. In other respects the two wars hastened the development of technology by extending the institutional apparatus for the encouragement of innovation by both the state and private industry . This process went further in some countries than in others, but no major belligerent nation could resist entirely the need to support and coordinate its scientific-technological effort. The wars were thus responsible for speeding the transformation from “little science,” with research still largely restricted to small-scale efforts by a few isolated scientists, to “big science,” with the emphasis on large research teams sponsored by governments and corporations, working collectively on the development and application of new techniques. While the extent of this transformation must not be overstated, and recent research has tended to stress the continuing need for the independent inventor at least in the stimulation of innovation, there can be little doubt that the change in the scale of technological enterprises had far-reaching consequences . It was one of the most momentous transformations of the 20th century, for it altered the quality of industrial and social organization. In the process it assured technology, for the first time in its long history, a position of importance and even honour in social esteem.

Fuel and power

There were no fundamental innovations in fuel and power before the breakthrough of 1945, but there were several significant developments in techniques that had originated in the previous century. An outstanding development of this type was the internal-combustion engine , which was continuously improved to meet the needs of road vehicles and airplanes. The high-compression engine burning heavy-oil fuels, invented by Rudolf Diesel in the 1890s, was developed to serve as a submarine power unit in World War I and was subsequently adapted to heavy road haulage duties and to agricultural tractors. Moreover, the sort of development that had transformed the reciprocating steam engine into the steam turbine occurred with the internal-combustion engine, the gas turbine replacing the reciprocating engine for specialized purposes such as aero-engines, in which a high power-to-weight ratio is important. Admittedly, this adaptation had not proceeded very far by 1945, although the first jet-powered aircraft were in service by the end of the war. The theory of the gas turbine, however, had been understood since the 1920s at least, and in 1929 Sir Frank Whittle , then taking a flying instructor’s course with the Royal Air Force , combined it with the principle of jet propulsion in the engine for which he took out a patent in the following year. But the construction of a satisfactory gas-turbine engine was delayed for a decade by the lack of resources, and particularly by the need to develop new metal alloys that could withstand the high temperatures generated in the engine. This problem was solved by the development of a nickel-chromium alloy, and, with the gradual solution of the other problems, work went on in both Germany and Britain to seize a military advantage by applying the jet engine to combat aircraft.

The principle of the gas turbine is that of compressing and burning air and fuel in a combustion chamber and using the exhaust jet from this process to provide the reaction that propels the engine forward. In its turbopropeller form, which developed only after World War II , the exhaust drives a shaft carrying a normal airscrew (propeller). Compression is achieved in a gas-turbine engine by admitting air through a turbine rotor. In the so-called ramjet engine, intended to operate at high speeds, the momentum of the engine through the air achieves adequate compression. The gas turbine has been the subject of experiments in road, rail, and marine transport, but for all purposes except that of air transport its advantages have not so far been such as to make it a viable rival to traditional reciprocating engines.
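The reaction principle can be put into numbers with the ideal thrust relation F = ṁ(v_exhaust − v_flight). The short sketch below (in Python, with entirely hypothetical mass-flow and velocity figures) is only an illustration of that relation, not a model of any particular engine.

```python
def net_thrust(mass_flow_kg_s: float, exhaust_velocity_m_s: float,
               flight_velocity_m_s: float) -> float:
    """Ideal net thrust of a simple jet engine, in newtons.

    Thrust is the reaction to accelerating the working gas:
    F = m_dot * (v_exhaust - v_flight), ignoring pressure thrust
    and the small added mass of the fuel itself.
    """
    return mass_flow_kg_s * (exhaust_velocity_m_s - flight_velocity_m_s)

# Hypothetical figures: 50 kg/s of air accelerated from 250 m/s to 600 m/s.
print(f"{net_thrust(50.0, 600.0, 250.0) / 1000:.1f} kN")  # -> 17.5 kN
```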


As far as fuel is concerned, the gas turbine burns mainly the middle fractions (kerosene, or paraffin) of refined oil, but the general tendency of its widespread application was to increase still further the dependence of the industrialized nations on the producers of crude oil , which became a raw material of immense economic value and international political significance. The refining of this material itself underwent important technological development. Until the 20th century it consisted of a fairly simple batch process whereby oil was heated until it vaporized, when the various fractions were distilled separately. Apart from improvements in the design of the stills and the introduction of continuous-flow production, the first big advance came in 1913 with the introduction of thermal cracking . This process took the less volatile fractions after distillation and subjected them to heat under pressure, thus cracking the heavy molecules into lighter molecules and so increasing the yield of the most valuable fuel, petrol or gasoline. The discovery of this ability to tailor the products of crude oil to suit the market marks the true beginning of the petrochemical industry. It received a further boost in 1936, with the introduction of catalytic cracking. By the use of various catalysts in the process, means were devised for still further manipulating the molecules of the hydrocarbon raw material. The development of modern plastics followed directly on this ( see below Plastics ). So efficient had the processes of utilization become that by the end of World War II the petrochemical industry had virtually eliminated all waste materials.

All the principles of generating electricity had been worked out in the 19th century, but by its end these had only just begun to produce electricity on a large scale. The 20th century witnessed a colossal expansion of electrical power generation and distribution. The general pattern has been toward ever-larger units of production, using steam from coal- or oil-fired boilers. Economies of scale and the greater physical efficiency achieved as higher steam temperatures and pressures were attained both reinforced this tendency. Experience in the United States indicates the trend: in the first decade of the 20th century, a generating unit with a capacity of 25,000 kilowatts with pressures up to 200–300 pounds per square inch at 400–500 °F (about 200–265 °C) was considered large, but by 1930 the largest unit was 208,000 kilowatts with pressures of 1,200 pounds per square inch at a temperature of 725 °F, while the amount of fuel necessary to produce a kilowatt-hour of electricity and the price to the consumer had fallen dramatically. As the market for electricity increased, so did the distance over which it was transmitted, and the efficiency of transmission required higher and higher voltages. The small direct-current generators of early urban power systems were abandoned in favour of alternating-current systems, which could be adapted more readily to high voltages. Transmission over a line of 155 miles (250 km) was established in California in 1908 at 110,000 volts, and Hoover Dam in the 1930s used a line of 300 miles (480 km) at 287,000 volts. The latter case may serve as a reminder that hydroelectric power , using a fall of water to drive water turbines, was developed to generate electricity where the climate and topography make it possible to combine production with convenient transmission to a market. Remarkable levels of efficiency were achieved in modern plants. One important consequence of the ever-expanding consumption of electricity in the industrialized countries has been the linking of local systems to provide vast power grids, or pools, within which power can be shifted easily to meet changing local needs for current.
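The push toward higher transmission voltages follows from simple resistive-loss arithmetic: for a given power delivered, line current falls in proportion to voltage, and the I²R loss falls with the square of the current. The sketch below illustrates this with invented line figures and a deliberately simplified single-phase, unity-power-factor model.

```python
def line_loss_kw(power_mw: float, voltage_kv: float, resistance_ohm: float) -> float:
    """Approximate resistive loss on a transmission line.

    I = P / V (single-phase simplification, unity power factor),
    loss = I^2 * R, returned in kilowatts.
    """
    current_a = (power_mw * 1e6) / (voltage_kv * 1e3)
    return current_a ** 2 * resistance_ohm / 1e3

# The same 50 MW sent over a line with 10 ohms of resistance:
for kv in (110, 287):
    print(f"{kv} kV: {line_loss_kw(50, kv, 10):,.0f} kW lost")
# The 110 kV line loses roughly (287/110)^2 = 6.8 times as much as the 287 kV line.
```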

Information technologies of 21st century and their impact on the society


Mohammad Yamin (ORCID: orcid.org/0000-0002-3778-3366)


The twenty-first century has witnessed the emergence of groundbreaking information technologies that have revolutionised our way of life. The revolution began late in the 20th century with the arrival of the internet in 1995, which has given rise to methods, tools and gadgets with astonishing applications across academic disciplines and business sectors. In this article we provide a design for a ‘spider robot’ which may be used for the efficient cleaning of deadly viruses. In addition, we examine some of the emerging technologies that are producing remarkable breakthroughs and improvements which were inconceivable earlier. In particular, we look at the technologies and tools associated with the Internet of Things (IoT), Blockchain, Artificial Intelligence, sensor networks and social media, and we analyse their capabilities and business value. As we recognise, most technologies, after completing their commercial journey, are taken up by the business world in both physical and virtual marketing environments. We also consider the social impact of some of these technologies and tools.


1 Introduction

The internet, which started in 1989 [ 1 ], now holds some 1.2 million terabytes of data from Google, Amazon, Microsoft and Facebook alone [ 2 ]. It is estimated that the surface web contains over four and a half billion websites, while the deep web, about which we know very little, is at least four hundred times bigger [ 3 ]. Email platforms emerged soon afterwards, in 1990, followed by many other applications. From 1995 to the early 21st century came a chain of Web 2.0 technologies such as e-commerce, social media platforms, e-business, e-learning, e-government and cloud computing [ 4 ]. Today a large number of internet-based technologies have countless applications in many domains, including business, science and engineering, and healthcare [ 5 ]. The impact of these technologies on our personal lives is such that we are compelled to adopt many of them whether we like it or not.

In this article we shall study the nature, usage and capabilities of the emerging and future technologies. Some of these technologies are Big Data Analytics, Internet of Things (IoT), Sensor networks (RFID, Location based Services), Artificial Intelligence (AI), Robotics, Blockchain, Mobile digital Platforms (Digital Streets, towns and villages), Clouds (Fog and Dew) computing, Social Networks and Business, Virtual reality.

With ever-increasing computing power and declining costs of data storage, many government and private organizations are gathering enormous amounts of data. The data accumulated over years of acquisition and processing in many organizations has become so large that it can no longer be analyzed by traditional tools within a reasonable time. Disciplines that routinely create Big data include astronomy, atmospheric science, biology, genomics, nuclear physics, biochemical experimentation, medical record keeping, and scientific research generally. Organizations that generate enormous volumes of data include Google, Facebook, YouTube, hospitals, parliaments, courts, newspapers and magazines, and government departments. Because of its size, the analysis of big data is not a straightforward task and often requires advanced methods and techniques. A lack of timely analysis of big data in certain domains may have devastating results and pose threats to societies, nature and the ecosystem.

2.1 Big medic data

The healthcare field is generating big data and has the potential to surpass other fields in the rate of data growth. Big medical data usually refers to a considerably bigger pool of health, hospital and treatment records, administrative medical claims, and data from clinical trials, smartphone applications, wearable devices such as RFID tags and heart-rate monitors, different kinds of social media, and omics research. Omics research (genomics, proteomics, metabolomics, etc.) in particular is leading the growth of Big data [ 6 , 7 ]. Its data analytics requirements include data cleaning, normalization, biomolecule identification, data dimensionality reduction, biological contextualization, statistical validation, data storage and handling, sharing and archiving. These tasks are required for the Big data in omics datasets such as genomics, transcriptomics, proteomics, metabolomics, metagenomics and phenomics [ 6 ].

According to [ 8 ], in 2011 alone the data in the United States healthcare system amounted to one hundred and fifty exabytes (one exabyte = one billion gigabytes, or 10^18 bytes), and the total is expected to reach 10^21 and eventually 10^24 bytes. Some scientists have classified medical data into three categories: (a) a large number of samples but a small number of parameters; (b) a small number of samples and a small number of parameters; and (c) a large number of samples and a large number of parameters [ 9 ]. Although the data in the first category may be analyzed by classical methods, it may be incomplete, noisy and inconsistent, and therefore require cleaning. The data in the third category can be big and may require advanced analytics.

2.2 Big data analytics

Big data cannot be analyzed in real time by traditional analytical methods. Its analysis, popularly known as Big Data Analytics, often involves a number of technologies, sophisticated processes and tools, as depicted in Fig. 1. Analyzed well, big data can provide smart decision making and business intelligence to businesses and corporations; left unanalyzed, it is impractical and a burden to the organization. Big data analytics involves mining and extracting useful associations (knowledge discovery) for intelligent decision making and forecasting. The main challenges are computational complexity, scalability and the visualization of data. In addition, information security risk increases with the surge in the amount of data that Big Data entails.

Figure 1: Big Data Analytics

The aim of data analytics has always been knowledge discovery to support smart and timely decision making. With big data, the knowledge base becomes wider and sharper, providing greater business intelligence and helping businesses lead their markets. Conventional processing paradigms and architectures are inefficient for the large datasets involved; the sheer size of the data sets requires parallel processing. Recent technologies such as Spark, Hadoop, MapReduce, R, data lakes and NoSQL have emerged to provide Big Data analytics. Alongside these and other analytics technologies, it is advantageous to invest in designing superior storage systems.
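To make the parallel-processing point concrete, here is a minimal MapReduce-style word count in plain Python. It sketches only the programming model that frameworks such as Hadoop and Spark distribute across clusters; it does not use any framework's actual API.

```python
from collections import defaultdict
from itertools import chain

def mapper(line: str):
    """Map phase: emit (word, 1) pairs for one input record."""
    for word in line.lower().split():
        yield word, 1

def reducer(word: str, counts):
    """Reduce phase: aggregate all counts emitted for one key."""
    return word, sum(counts)

def map_reduce(records):
    # Shuffle step: group intermediate (key, value) pairs by key.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

print(map_reduce(["big data needs parallel processing",
                  "big clusters process big data"]))
# e.g. {'big': 3, 'data': 2, 'needs': 1, ...}
```

In a real cluster the map, shuffle and reduce steps run on many machines in parallel; the toy version above keeps the same structure on a single machine.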

Health data predominantly consists of images, graphs, audio and video. Analysing such data to gain meaningful insights and diagnoses may depend on the choice of tools. Medical data has traditionally been scattered across the organization and often not organized properly; what we usually find are record-keeping systems holding heterogeneous data, which require extra effort to consolidate onto a common platform. As discussed above, the health profession produces enormous amounts of data, and analysing it in an efficient and timely manner can potentially save many lives.

Commercial operation of clouds on company platforms began in 1999 [ 10 ]. Initially, clouds complemented and empowered outsourcing. In the early stages there were privacy concerns associated with Cloud Computing, as the owners of data had to hand custody of their data to the cloud providers. As time passed, however, confidence-building measures by cloud providers made the technology so prevalent that most of the world’s SMEs now use it in one form or another. More information on Cloud Computing can be found in [ 11 , 12 ].

3.1 Fog computing

As faster processing became a necessity for some critical applications, cloud computing gave rise to Fog, or Edge, computing. As can be seen in the Gartner hype cycles in Figs. 2 and 3, Edge computing, as an emerging technology, also peaked in 2017–18. As shown in the Cloud Computing architecture in Fig. 4, the middle, or second, layer of the cloud configuration is represented by Fog computing. For some applications, the delay in communication between computing devices in the field and data in a cloud (often thousands of miles apart) is detrimental to time requirements, as it may cause considerable delay in time-sensitive applications. For example, processing and storage for the early warning of disasters (stampedes, tsunamis, etc.) must happen in real time. For such applications, computing and storage resources should be placed closer to where the computing is needed (application areas such as the digital street), and Fog computing is considered suitable for these scenarios [ 13 ]. Clouds are an integral part of many IoT applications and play a central role in ubiquitous computing systems in health-related cases like the one depicted in Fig. 5. Some applications of Fog computing can be found in [ 14 , 15 , 16 ]; more results on Fog computing are available in [ 17 , 18 , 19 ].

Figure 2: Emerging Technologies 2018 (Gartner hype cycle)

Figure 3: Emerging Technologies 2017 (Gartner hype cycle)

Figure 4: Relationship of Cloud, Fog and Dew computing

Figure 5: Snapshot of a ubiquitous system

3.2 Dew computing

When the Fog layer is overloaded and cannot cater for peaks in demand from high-demand applications, it offloads some of its data and/or processing to the associated cloud. The Fog layer also depends on a complementary bottom layer of the cloud architecture, shown in Fig. 4, known as the Dew layer. The purpose of the Dew layer is to serve tasks by exploiting resources near the end user, with minimal internet access [ 17 , 20 ]. Dew computing determines when its services should be used and how they link with the other layers of the Cloud architecture. It is also important to note that Dew computing [ 20 ] belongs to the distributed computing hierarchy and is integrated with Fog computing services, as is evident in Fig. 4. In summary, the Cloud architecture has three layers: the first is the Cloud, the second the Fog, and the third the Dew.
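A minimal sketch of this three-layer placement idea follows, with entirely hypothetical latency budgets: a task is served by the nearest tier (Dew, then Fog, then Cloud) that can meet its deadline, and an overloaded Dew layer offloads upward, as described above.

```python
# Hypothetical round-trip latencies per tier, in milliseconds.
TIER_LATENCY_MS = {"dew": 2, "fog": 20, "cloud": 150}

def place_task(deadline_ms: float, dew_overloaded: bool = False) -> str:
    """Pick the nearest tier that satisfies the task's deadline.

    If the Dew layer is overloaded, the task is offloaded upward,
    first to Fog and then to Cloud, mirroring the hierarchy of Fig. 4.
    """
    for tier in ("dew", "fog", "cloud"):
        if tier == "dew" and dew_overloaded:
            continue
        if TIER_LATENCY_MS[tier] <= deadline_ms:
            return tier
    return "reject"  # no tier can meet the deadline

print(place_task(deadline_ms=10))                        # 'dew'
print(place_task(deadline_ms=10, dew_overloaded=True))   # 'reject' (fog is too slow)
print(place_task(deadline_ms=50, dew_overloaded=True))   # 'fog'
```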

4 Internet of things

The definition of the Internet of Things (IoT), depicted in Fig. 6, has been changing with the passage of time. The name seems to have evolved with the growing number of internet-based applications that use many technologies, devices and tools. Accordingly, ‘things’ (technologies, devices and tools) are used together in internet-based applications to generate data and to provide assistance and services to users from anywhere, at any time. The internet can be considered a uniform technology from any location in that it provides the same service of ‘connectivity’, although its speed and security are not uniform. IoT as an emerging technology peaked during 2017–18, as is evident from Figs. 2 and 3, and it is expanding at a very fast rate: according to [ 21 , 22 , 23 , 24 ], the number of IoT devices could run into the millions by the year 2021.

Figure 6: Internet of Things

IoT is providing some remarkable applications in tandem with wearable devices, sensor networks, Fog computing and other technologies, improving critical facets of our lives such as healthcare management, service delivery and business processes. Applications of IoT in the field of crowd management are discussed in [ 14 ], and applications in the context of privacy and security in [ 15 , 16 ]. Key devices and technologies associated with IoT include RFID tags [ 25 ], the internet, computers, cameras, mobile devices, coloured lights, sensors, sensor networks, drones, and Cloud, Fog and Dew computing.
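The sketch below simulates, in plain Python, the sensor-to-alert loop on which such deployments rely (for instance, crowd-density early warning). The sensor names, the threshold and the random readings are all invented for the example; no real sensor or network API is used.

```python
import random

DENSITY_ALERT_THRESHOLD = 4.0   # hypothetical: people per square metre

def read_density_sensor(sensor_id: str) -> float:
    """Stand-in for a real sensor read over a field network."""
    return round(random.uniform(1.0, 6.0), 2)

def poll_once(sensor_ids):
    """Poll each sensor and flag readings that exceed the threshold."""
    alerts = []
    for sid in sensor_ids:
        value = read_density_sensor(sid)
        if value > DENSITY_ALERT_THRESHOLD:
            alerts.append((sid, value))
    return alerts

for sid, value in poll_once(["gate-1", "gate-2", "gate-3"]):
    print(f"ALERT {sid}: density {value} exceeds {DENSITY_ALERT_THRESHOLD}")
```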

5 Applications of blockchain

Blockchain is usually associated with cryptocurrencies like Bitcoin (currently there are over one and a half thousand cryptocurrencies, and the number is still rising), but the technology can also be used for many other critical applications in our daily lives. A Blockchain is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography and governed by a consensus mechanism; it is essentially a record of digital events [ 26 ]. A block represents a completed transaction or ledger entry. Subsequent and prior blocks are chained together, reflecting the status of the most recent transaction; the role of the chain is to link records in chronological order, and it continues to grow as further transactions take place and new blocks are added. User security and ledger consistency are provided by asymmetric cryptography and distributed consensus algorithms. Once a block is created, it cannot be altered or removed. The technology eliminates the need for a bank statement to verify the availability of funds, or for a lawyer to certify the occurrence of an event. The benefits of Blockchain technology are inherent in its characteristics of decentralization, persistency, anonymity and auditability [ 27 , 28 ].
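The hash chaining that makes retroactive edits detectable can be shown in a few lines of Python. This is a toy illustration of the ledger structure only: it has no consensus mechanism, networking or digital signatures, and all field names are chosen for the example.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 over the block's canonical JSON serialisation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transaction: str) -> None:
    """Append a block recording one transaction and the hash of its predecessor."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "timestamp": time.time(),
                  "transaction": transaction, "prev_hash": previous})

def is_valid(chain: list) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(is_valid(ledger))                           # True
ledger[0]["transaction"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(ledger))                           # False: tampering is detectable
```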

5.1 Blockchain for business use

Blockchain, being the technology behind cryptocurrencies, started in the open-source Bitcoin community as a way to allow reliable peer-to-peer financial transactions. It has made it possible to build a globally functional currency relying on code alone, without any bank or third-party platform [ 28 ]. These features have made Blockchain technology secure and transparent for business transactions of any kind, in any currency. The literature describes many applications of Blockchain; nowadays they involve various kinds of transactions requiring verification and automated payment using smart contracts. The concept of smart contracts [ 28 ] has virtually eliminated the role of intermediaries. The technology is most suitable for businesses requiring high reliability and honesty, and because of its security and transparency it can help businesses attract customers. Blockchain can also be used to eliminate fake permits, as can be seen in [ 29 ].
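As a purely conceptual sketch (real smart contracts execute on a blockchain platform, not in ordinary Python), the snippet below mimics how a contract can release a payment automatically once agreed conditions are met, removing the intermediary. The party names and amounts are invented.

```python
class EscrowContract:
    """Toy stand-in for a smart contract: funds are released only when both
    parties have confirmed delivery; until then the contract simply waits."""

    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.confirmations = set()
        self.settled = False

    def confirm(self, party: str) -> str:
        if self.settled:
            return "already settled"
        self.confirmations.add(party)
        if {self.buyer, self.seller} <= self.confirmations:
            self.settled = True
            return f"released {self.amount} to {self.seller}"
        return "waiting for the other party"

contract = EscrowContract("importer", "exporter", 1000.0)
print(contract.confirm("importer"))   # waiting for the other party
print(contract.confirm("exporter"))   # released 1000.0 to exporter
```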

5.2 Blockchain for healthcare management

As discussed above, Blockchain is an efficient and transparent way of keeping digital records, a feature that is highly desirable in healthcare management. The medical field is still struggling to manage its data efficiently in digital form: disparate and non-uniform record storage methods hamper digitization, data warehousing and big data analytics, all of which would allow efficient management and sharing of the data. The magnitude of the problem can be gauged from examples such as the target of the United Kingdom's National Health Service (NHS) to digitize UK healthcare by 2023 [ 30 ]. These problems lead to data inaccuracies, which can cause many issues in healthcare management, including clinical and administrative errors.

The use of Blockchain in healthcare can bring revolutionary improvements. For example, smart contracts can make it easier for doctors to access patients’ data held by other organisations. The current consent process is often bureaucratic and far from simplified or standardised, which creates many problems for patients and for the specialists treating them. The cost associated with transferring medical records between locations can be significant, and Blockchain can reduce it to virtually zero. More information on the use of Blockchain for healthcare data can be found in [ 30 , 31 ].

6 Environment cleaning robot

One ongoing healthcare issue is the eradication of deadly viruses and bacteria from hospitals and healthcare units. Nosocomial infections are a common problem for hospitals and are currently treated using various techniques [ 32 , 33 ]. Historically, cleaning hospital wards and operating rooms with chlorine has been effective, but in the face of deadly viruses such as Ebola, HIV/AIDS, swine influenza (H1N1, H1N2), various strains of flu, Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), this method has dangerous implications [ 14 ]. A more advanced approach used in some US hospitals employs “robots” to purify the space, as can be seen in [ 32 , 33 ]. However, the current “robots” have limitations. Most of these devices require a human to place them in the infected area. They cannot move effectively (they merely rotate on the spot), so the UV light reaches only the limited area within range of the emitter. Finally, the robot itself may become contaminated, since the light does not reach most of its own surfaces. There is therefore an emerging need for a robot that does not require humans to handle it physically, that can purify an entire room by covering all its surfaces with UV light, and that does not become contaminated itself.

Figure 7 gives an overview of the design of a fully motorized spider robot with six legs. The robot supports Wi-Fi connectivity for control and is able to move around the room and clean the entire area. The spider design allows the robot to move on any surface, including climbing steps; most importantly, the robot uses its legs both to move the UV light emitter and to clean its own body before leaving the room. This substantially reduces the risk of the robot transmitting any infection.

Figure 7: Spider robot for virus cleaning

Additionally, the robot will be equipped with a motorized camera allowing the operator to monitor the space and stop the UV emission in unforeseen situations. The operator can control the robot via a networked graphical user interface and/or from an augmented reality environment using technologies such as the Oculus Touch. In more detail, the user will wear the Oculus Rift virtual reality headset and use the Oculus Touch hand controllers to remote-control the robot. This gives the user the robot's view in a natural manner and allows the user to control the two front robotic arms of the spider robot via the Oculus Touch controllers, making it easy to perform advanced movements simply by moving the hands. The physical movements of the human hand are captured by the Oculus Touch sensors and transmitted to the robot, which then uses inverse (reverse) kinematics to translate the position and actions of the human hand into movements of the robotic arm. This technique will also be used during the robot's training phase, in which the human user teaches the robot how to clean various surfaces and then purify itself, simply by moving their hands accordingly. The design of the spider robot was proposed in a project proposal submitted to the King Abdulaziz City of Science and Technology ( https://www.kacst.edu.sa/eng/Pages/default.aspx ) by the author and George Tsaramirsis ( https://www.researchgate.net/profile/George_Tsaramirsis ).
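The inverse-kinematics step mentioned above can be illustrated for a simple planar two-link arm; the link lengths and target point below are hypothetical, and a real six-legged robot would of course need a full three-dimensional model.

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Joint angles (radians) that place a 2-link planar arm's tip at (x, y).

    Uses the law of cosines and returns the 'elbow-down' solution,
    or None if the point is out of reach.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None                      # target outside the workspace
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Hypothetical 30 cm + 20 cm links reaching toward a point ~35 cm away.
angles = two_link_ik(0.30, 0.18, 0.30, 0.20)
print([round(math.degrees(a), 1) for a in angles])
```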

7 Conclusions

We have presented details of some emerging technologies and real-life applications that are providing businesses with remarkable, previously unthinkable opportunities. Businesses are continuously trying to increase their use of new technologies and tools to improve processes and benefit their clients. IoT and its associated technologies can now provide real-time, ubiquitous processing that eliminates the need for human surveillance. Similarly, Virtual Reality, Artificial Intelligence and robotics are finding remarkable applications in medical surgery. As discussed, with the help of sensor networks and other associated technologies we can now predict and mitigate some disasters, such as stampedes. Finally, the growth of Big Data Analytics is helping businesses and government agencies make smarter decisions to achieve their targets.

References

Naughton J (2016) The evolution of the internet: from military experiment to general purpose technology. J Cyber Policy 1(1):5–28. https://doi.org/10.1080/23738871.2016.1157619


Gareth Mitchell (2019). How much data is on the internet? Science Focus (The Home of BBC Science Focus Magazine). [Online]. https://www.sciencefocus.com/future-technology/how-much-data-is-on-the-internet/ . Accessed 20 April 2019

Mae Rice (2018). The deep web is the 99% of the internet you can’t google. Curiosity. [Online]. https://curiosity.com/topics/the-deep-web-is-the-99-of-the-internet-you-cant-google-curiosity/ . Accessed 20 April 2019

Slumkoski C (2012) History on the internet 2.0: the rise of social media. Acadiensis 41(2):153–162 (Summer/Autumn-ÉTÉ/Automne)


Ibarra-Esquer JE, González-Navarro FF, Flores-Rios BL, Burtseva L, María A, Astorga-Vargas M (2017) Tracking the evolution of the internet of things concept across different application domains. Sensors (Basel) 17(6):1379. https://doi.org/10.3390/s17061379

Misra Biswapriya B, Langefeld Carl, Olivier Michael, Cox Laura A (2018) Integrated omics: tools, advances and future approaches. J Mol Endocrinol 62(1):R21–R45. https://doi.org/10.1530/JME-18-0055

Lee Choong Ho, Yoon Hyung-Jin (2017) Medical big data: promise and challenges. Kidney Res Clin Practice 36(1):3–13. https://doi.org/10.23876/j.krcp.2017.36.1.3

Faggella D (2019) Where healthcare’s big data actually comes from. Emerj. [Online]. https://emerj.com/ai-sector-overviews/where-healthcares-big-data-actually-comes-from/

Sinha A, Hripcsak G, Markatou M (2009) Large datasets in biomedicine: a discussion of salient analytic issues. J Am Med Inform Assoc 16:759–767. https://doi.org/10.1197/jamia.M2780

Foote KD (2017) A brief history of cloud computing. DATAVERSITY. [Online]. Accessed 5 May 2019

Vassakis K, Petrakis E, Kopanakis I (2018) Big data analytics: applications, prospects and challenges. In: Skourletopoulos G, Mastorakis G, Mavromoustakis C, Dobre C, Pallis E (eds) Mobile big data. Lecture notes on data engineering and communications technologies, vol 10. Springer, Cham

Yamin M, Al Makrami AA (2015) Cloud computing in SMEs: case of Saudi Arabia. BIJIT—BVICAM’s Int J Inform Technol 7(1):853–860

Ahmed E, Ahmed A, Yaqoob I, Shuja J, Gani A, Imran M, Shoaib M (2017) Bringing computation closer toward the user network: is edge computing the solution? IEEE Commun Mag 55:138–144

Yamin M, Basahel AM, Abi Sen AA (2018) Managing crowds with wireless and mobile technologies. Hindawi. Wireless Commun Mobile Comput. Volume 2018, Article ID 7361597, pp 15. https://doi.org/10.1155/2018/7361597

Yamin M, Abi Sen AA (2018) Improving privacy and security of user data in location based services. Int J Ambient Comput Intell 9(1):19–42. https://doi.org/10.4018/IJACI.2018010102

Sen AAA, Eassa FA, Jambi K, Yamin M (2018) Preserving privacy in internet of things—a survey. Int J Inform Technol 10(2):189–200. https://doi.org/10.1007/s41870-018-0113-4

Longo Mathias, Hirsch Matías, Mateos Cristian, Zunino Alejandro (2019) Towards integrating mobile devices into dew computing: a model for hour-wise prediction of energy availability. Information 10(3):86. https://doi.org/10.3390/info10030086

Nunna S, Kousaridas A, Ibrahim M, Dillinger M, Thuemmler C, Feussner H, Schneider A (2015) Enabling real-time context-aware collaboration through 5G and mobile edge computing. In: Proceedings of the 12th international conference on information technology-new generations, Las Vegas, NV, USA, 13–15 April 2015, pp 601–605

Vaquero LM, Rodero-Merino L (2014) Finding your way in the fog: towards a comprehensive definition of fog computing. SIGCOMM Comput Commun Rev 44:27–32

Ray PP (2019) Minimizing dependency on internetwork: Is dew computing a solution? Trans Emerg Telecommun Technol 30:e3496

Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of things. In: Proceedings of the first edition of the workshop on mobile cloud computing, Helsinki, Finland, 17 August 2012, pp 13–16

Jia X, Feng Q, Fan T, Lei Q (2012) RFID technology and its applications in internet of things (IoT). In: Proceedings of the 2nd international conference on consumer electronics, communications and networks (CECNet), pp 1282–1285. IEEE. https://doi.org/10.1109/cecnet.2012.6201508

Said O, Masud M (2013) Towards internet of things: survey and future vision. Int J Comput Netw 5(1):1–17

Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (IoT): a vision, architectural elements, and future directions. Future Gener Comput Syst 29(7):1645–1660. https://doi.org/10.1016/j.future.2013.01.010

Beck R, Avital M, Rossi M et al (2017) Blockchain technology in business and information systems research. Bus Inf Syst Eng 59:381. https://doi.org/10.1007/s12599-017-0505-1

Yamin M (2018) Managing crowds with technology: cases of Hajj and Kumbh Mela. Int J Inform Technol. https://doi.org/10.1007/s41870-018-0266-1

Zheng Z, Xie S, Dai H, Chen X, Wang H An overview of blockchain technology: architecture, consensus, and future trends. https://www.researchgate.net/publication/318131748_An_Overview_of_Blockchain_Technology_Architecture_Consensus_and_Future_Trends . Accessed May 01 2019

Al-Saqafa W, Seidler N (2017) Blockchain technology for social impact: opportunities and challenges ahead. J Cyber Secur Policy. https://doi.org/10.1080/23738871.2017.1400084

Alotaibi M, Alsaigh M, Yamin M (2019) Blockchain for controlling Hajj and Umrah permits. Int J Comput Sci Netw Secur 19(4):69–77

Vazirani AA, O’Donoghue O, Brindley D (2019) Implementing blockchains for efficient health care: systematic review. J Med Internet Res 21(2):12439. https://doi.org/10.2196/12439

Yamin M (2018) IT applications in healthcare management: a survey. Int J Inform Technol 10(4):503–509. https://doi.org/10.1007/s41870-018-0203-3

Begić A. (2018) Application of Service Robots for Disinfection in Medical Institutions. In: Hadžikadić M., Avdaković S. (eds) Advanced Technologies, Systems, and Applications II. IAT (2017) Lecture Notes in Networks and Systems, vol 28. Springer, Cham

Mettler T, Sprenger M, Winter R (2017) Service robots in hospitals: new perspectives on niche evolution and technology affordances. Euro J Inform Syst. 10:11. https://doi.org/10.1057/s41303-017-0046-1


Author information

Authors and affiliations.

Department of MIS, Faculty of Economics and Admin, King Abdulaziz University, Jeddah, Saudi Arabia

Mohammad Yamin


Corresponding author

Correspondence to Mohammad Yamin .


About this article

Yamin, M. Information technologies of 21st century and their impact on the society. Int. j. inf. tecnol. 11 , 759–766 (2019). https://doi.org/10.1007/s41870-019-00355-1


Received : 05 May 2019

Accepted : 09 August 2019

Published : 16 August 2019

Issue Date : December 2019

DOI : https://doi.org/10.1007/s41870-019-00355-1


  • Emerging and future technologies
  • Internet of things
  • Sensor networks
  • Location based services
  • Mobile digital platforms

21st Century Communication Technology Essay


Changing communication technology and the presence of the internet have greatly affected the way firms conduct business. It is now possible to conduct business using largely virtual resources while still earning reasonable profit and revenue from operations with minimal investment. Communications technology has dramatically changed the way people in a company interact and communicate with each other for business as well as personal purposes.

The most common forms of communication technology used in companies over time have been face-to-face communication, memos, letters, bulletin boards and financial reports. The choice of medium is based on the purpose of the communication and the audience being targeted. Face-to-face communication is personal and immediate, allowing a two-way flow of ideas.

On the other hand, communication media like bulletin boards and financial reports are drawn up for a broad audience and target mass reach. In the 21st century, however, it is possible to conduct business and communicate with employees using innovative technologies such as email, SMS, video conferencing and handheld devices like PDAs and the BlackBerry (Lengel & Daft, 1988). The use of this technology can also help the company increase two-way communication within management, making way for an efficient flow of ideas. Strategic use of these media can help in connecting with lower management and resolving conflicts that would otherwise go unaddressed and increase employee dissatisfaction (‘Whispering Class Must Be Heard’, 2008).

A firm that prepares tax returns for clients needs to communicate with the clients and its staff in an efficient and immediate manner to resolve any issues that come up while drawing up papers and preparing the returns. In this regard it is beneficial for the firm to make use of modern communication technology for communicating with its clients and staff. The firm can use SMS to communicate with its staff and inform them of any urgent meetings.

SMS can also be used to inform clients about any sudden change of plans or to schedule a meeting when direct communication is not possible at that moment. Aside from this, email can be employed to provide clients with updates on their tax returns and to flag any discrepancies and issues that may come up. Staff can also be delegated work and kept in the loop using detailed emails with attachments for tax-return evidence and the like.

Video conferencing can be used to establish a link between the client and the staff working on the tax returns for face-to-face meetings when meeting in person is not possible due to geographic or time constraints.

While modern communication media can be expensive to acquire and use, their implementation can help the firm attain a competitive advantage through greater operational efficiency and the more personal service it can offer its customers. Twenty-first-century communication media can also be used strategically to motivate and reward employees: instead of a cash bonus or raise, a BlackBerry or an iPod can be provided (‘Rewarding a Job Well Done’, 2008). This increases employee motivation with rewards that are substantial and can be used for business purposes as well.

‘Rewarding a Job Well Done’, LW , 2008.

‘Whispering Class Must Be Heard’, 2008.

Lengel, R.H., Daft, R.L., ‘The Selection of Communication Media as an Executive Skill’, Academy of Management Executive, 1988, 2, no. 3, pp. 225–32.


IvyPanda. (2021, November 10). 21st Century Communication Technology. https://ivypanda.com/essays/21st-century-communication-technology/

"21st Century Communication Technology." IvyPanda , 10 Nov. 2021, ivypanda.com/essays/21st-century-communication-technology/.

IvyPanda . (2021) '21st Century Communication Technology'. 10 November.

IvyPanda . 2021. "21st Century Communication Technology." November 10, 2021. https://ivypanda.com/essays/21st-century-communication-technology/.

1. IvyPanda . "21st Century Communication Technology." November 10, 2021. https://ivypanda.com/essays/21st-century-communication-technology/.

Bibliography

IvyPanda . "21st Century Communication Technology." November 10, 2021. https://ivypanda.com/essays/21st-century-communication-technology/.

  • Open access
  • Published: 04 December 2018

The computer for the 21st century: present security & privacy challenges

  • Leonardo B. Oliveira 1 ,
  • Fernando Magno Quintão Pereira 2 ,
  • Rafael Misoczki 3 ,
  • Diego F. Aranha 4 ,
  • Fábio Borges 5 ,
  • Michele Nogueira 6 ,
  • Michelle Wangham 7 ,
  • Min Wu 8 &
  • Jie Liu 9  

Journal of Internet Services and Applications, volume 9, Article number: 24 (2018)


Decades have gone by since Mark Weiser published his influential work on the computer of the 21st century. Over the years, some of the UbiComp features presented in that paper have been gradually adopted by industry players in the technology market. While this technological evolution has brought many benefits to our society, it has also posed, along the way, countless challenges that we have yet to surmount. In this paper, we address major challenges from the areas that most afflict the UbiComp revolution:

Software Protection: weakly typed languages, polyglot software, and networked embedded systems.

Long-term Security: recent advances in cryptanalysis and quantum attacks.

Cryptography Engineering: lightweight cryptosystems and their secure implementation.

Resilience: issues related to service availability and the paramount role of resilience.

Identity Management: requirements to identity management with invisibility.

Privacy Implications: sensitivity data identification and regulation.

Forensics: trustworthy evidence from the synergy of digital and physical world.

We point out directions towards the solutions of those problems and claim that if we get all this right, we will turn the science fiction of UbiComp into science fact.

1 Introduction

In 1991, Mark Weiser described a vision of the Computer for the 21st Century [ 1 ]. Weiser, in his prophetic paper, argued the most far-reaching technologies are those that allow themselves to disappear, vanish into thin air. According to Weiser, this oblivion is a human – not a technological – phenomenon: “Whenever people learn something sufficiently well, they cease to be aware of it,” he claimed. This event is called “tacit dimension” or “compiling” and can be witnessed, for instance, when drivers react to street signs without consciously having to process the letters S-T-O-P [ 1 ].

A quarter of a century later, however, Weiser’s dream is far from coming true. Over the years, many of his concepts regarding pervasive and ubiquitous computing (UbiComp) [ 2 , 3 ] have materialized into what today we call Wireless Sensor Networks [ 4 , 5 ], the Internet of Things [ 6 , 7 ], Wearables [ 8 , 9 ], and Cyber-Physical Systems [ 10 , 11 ]. The applications of these systems range from traffic-accident and CO2-emission monitoring to autonomous automobiles and in-home patient care. Nevertheless, besides all their benefits, the advent of these systems has also brought about some drawbacks. Unless we address them appropriately, the continuity of Weiser’s prophecy will be at stake.

UbiComp poses new drawbacks because, vis-à-vis traditional computing, it exhibits an entirely different outlook [ 12 ]. Computer systems in UbiComp, for instance, feature sensors, CPUs, and actuators. Respectively, this means they can hear (or spy on) the user, process her/his data (and, possibly, find out something confidential about her/him), and respond to her/his actions (or, ultimately, expose her/him by revealing some secret). Those capabilities, in turn, make proposals designed for conventional computers ill-suited to the UbiComp setting and present new challenges.

In the above scenarios, some of the most critical challenges lie in the areas of Security and Privacy [ 13 ]. This is so because the market and users often pursue a system full of features at the expense of proper operation and protection; although, conversely, as computing elements pervade our daily lives, the demand for stronger security schemes becomes greater than ever. Notably, there is a dire need for a secure mechanism able to encompass all aspects and manifestations of UbiComp, across time as well as space, and in a seamless and efficient manner.

In this paper, we discuss contemporary security and privacy issues in the context of UbiComp (Fig.  1 ). We examine multiple research problems still open and point to promising approaches towards their solutions. More precisely, we investigate the following challenges and their ramifications.

Figure 1: Current security and privacy issues in UbiComp

Software protection in Section 2 : we study the impact of the adoption of weakly typed languages by resource-constrained devices and discuss mechanisms to mitigate this impact. We go over techniques to validate polyglot software (i.e., software based on multiple programming languages), and revisit promising methods to analyze networked embedded systems.

Long-term security in Section 3 : we examine the security of today’s widely used cryptosystems (e.g., RSA and ECC-based), present some of the latest threats (e.g., the advances in cryptanalysis and quantum attacks), and explore new directions and challenges to guarantee long-term security in the UbiComp setting.

Cryptography engineering in Section 4 : we restate the essential role of cryptography in safeguarding computers, discuss the status quo of lightweight cryptosystems and their secure implementation, and highlight challenges in key management protocols.

Resilience in Section 5 : we highlight issues related to service availability and we reinforce the importance of resilience in the context of UbiComp.

Identity Management in Section 6 : we examine the main requirements to promote identity management (IdM) in UbiComp systems to achieve invisibility, revisit the most used federated IdM protocols, and explore open questions and research opportunities to provide a proper IdM approach for pervasive computing.

Privacy implications in Section 7 : we explain why security is necessary but not sufficient to ensure privacy, go over important privacy-related issues (e.g., sensitivity data identification and regulation), and discuss some tools of the trade to fix those (e.g., privacy-preserving protocols based on homomorphic encryption).

Forensics in Section 8 : we present the benefit of the synergistic use of physical and digital evidence to facilitate trustworthy operation of cyber systems.

We believe that only if we tackle these challenges right can we turn the science fiction of UbiComp into science fact.

In particular, we choose to address the areas above because they represent promising research directions and cover different aspects of UbiComp security and privacy.

2 Software protection

Modern UbiComp systems are rarely built from scratch. Components developed by different organizations, with different programming models and tools, and under different assumptions are integrated to offer complex capabilities. In this section, we analyze the software ecosystem that emerges from such a world. Figure  2 provides a high-level representation of this ecosystem. In the rest of this section, we shall focus specially on three aspects of this environment, which pose security challenges to developers: the security shortcomings of C and C++, the dominant programming languages among cyber-physical implementations; the interactions between these languages and other programming languages, and the consequences of these interactions on the distributed nature of UbiComp applications. We start by diving deeper into the idiosyncrasies of C and C++.

Figure 2: A UbiComp system is formed by modules implemented in a combination of different programming languages. This diversity poses challenges to software security

2.1 Type safety

A great deal of the software used in UbiComp systems is implemented in C or in C++. This fact is natural, given the unparalleled efficiency of these two programming languages. However, if, on the one hand, C and C++ yield efficient executables, on the other hand their weak type systems give rise to a plethora of software vulnerabilities. In programming-language argot, we say that a type system is weak when it does not support two key properties: progress and preservation [ 14 ]. The formal definitions of these properties are immaterial for the discussion that follows. It suffices to know that, as a consequence of weak typing, neither C nor C++ ensures, for instance, bounded memory accesses. Therefore, programs written in these languages can access invalid memory positions. As an illustration of the dangers incurred by this possibility, it suffices to know that out-of-bounds accesses are the principle behind buffer overflow exploits.

The software security community has been developing different techniques to deal with the intrinsic vulnerabilities of C/C++/assembly software. Such techniques can be fully static, fully dynamic or a hybrid of both approaches. Static protection mechanisms are implemented at the compiler level; dynamic mechanisms are implemented at the runtime level. In the rest of this section, we list the most well-known elements in each category.

Static analyses provide a conservative estimate of a program's behavior without requiring its execution. This broad family of techniques includes, for instance, abstract interpretation [ 15 ], model checking [ 16 ] and guided proofs [ 17 ]. The main advantages of static analyses are their low runtime overhead and their soundness: inferred properties are guaranteed to always hold true. However, static analyses also have disadvantages. In particular, most of the interesting properties of programs lie in undecidable territory [ 18 ]. Furthermore, the verification of many formal properties, even when decidable, incurs a prohibitive computational cost [ 19 ].
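As a flavor of how such a conservative estimate works, the toy sketch below performs an interval-style "abstract interpretation" of an array index: it argues bounds safety for every possible run without executing the program. The interval values and buffer sizes are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Abstract value: every concrete value lies in [lo, hi]."""
    lo: int
    hi: int

    def __add__(self, other: "Interval") -> "Interval":
        # Interval addition is the abstract counterpart of integer addition.
        return Interval(self.lo + other.lo, self.hi + other.hi)

def in_bounds(index: Interval, length: int) -> bool:
    """Sound check: True only if every concrete index is within [0, length)."""
    return 0 <= index.lo and index.hi < length

# Suppose user input i is known (e.g. from a range check) to lie in [0, 3],
# and the program then accesses buf[i + 4].
i = Interval(0, 3)
access = i + Interval(4, 4)          # abstractly, the index lies in [4, 7]

print(in_bounds(access, 10))         # True: safe for a 10-element buffer
print(in_bounds(access, 6))          # False: a 6-element buffer may overflow
```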

Dynamic analyses come in several flavors: testing (KLEE [ 20 ]), profiling (Aprof [ 21 ], Gprof [ 22 ]), symbolic execution (DART [ 23 ]), emulation (Valgrind [ 24 ]), and binary instrumentation (Pin [ 25 ]). The virtues and limitations of dynamic analyses are exactly the opposite of those found in static techniques. Dynamic analyses usually do not raise false alarms: bugs are described by examples, which normally lead to consistent reproduction [ 26 ]. However, they are not required to always find security vulnerabilities in software. Furthermore, the runtime overhead of dynamic analyses still makes it prohibitive to deploy them into production software [ 27 ].

As a middle point, several research groups have proposed ways to combine static and dynamic analyses, producing different kinds of hybrid approaches to secure low-level code. This combination might yield security guarantees that are strictly more powerful than what could be obtained by either the static or the dynamic approaches, when used separately [ 28 ]. Nevertheless, negative results still hold: if an attacker can take control of the program, usually he or she can circumvent state-of-the-art hybrid protection mechanisms, such as control flow integrity [ 29 ]. This fact is, ultimately, a consequence of the weak type system adopted by languages normally seen in the implementation of UbiComp systems. Therefore, the design and deployment of techniques that can guard such programming languages, without compromising their efficiency to the point where they will no longer be adequate to UbiComp development, remains an open problem.

In spite of the difficulties of bringing formal methods to play a larger role in the design and implementation of programming languages, much has already been accomplished in this field. Testimony to this statement is the fact that today researchers are able to ensure the safety of entire operating system kernels, as demonstrated by Gerwin et al. [ 30 ], and to ensure that compilers meet the semantics of the languages that they process [ 31 ]. Nevertheless, it is reasonable to think that certain safety measures might come at the cost of performance and therefore we foresee that much of the effort of the research community in the coming years will be dedicated to making formal methods not only more powerful and expressive, but also more efficient to be used in practice.

2.2 Polyglot programming

Polyglot programming is the art and discipline of writing source code that involves two or more programming languages. It is common among implementations of cyber-physical systems. As an example, Ginga, the Brazilian protocol for digital TV, is mostly implemented in Lua and C [ 32 ]. Figure  3 shows an example of communication between a C and a Lua program. Other examples of interactions between programming languages include bindings between C and Python [ 33 ], C and Elixir [ 34 ] and the Java Native Interface [ 35 ]. Polyglot programming complicates the protection of systems. Difficulties arise due to a lack of multi-language tools and due to unchecked memory bindings between C/C++ and other languages.

Figure 3: Two-way communication between a C and a Lua program
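Although we do not reproduce the exact code of Fig. 3 here, the flavor of such a binding can be sketched with the Lua C API (a hypothetical minimal example): the host C program registers a C function that Lua code can call and, in turn, invokes a Lua function, with values crossing the boundary as untyped stack slots.

```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

/* Lua -> C direction: a C function exposed to Lua scripts. */
static int c_strlen(lua_State *L) {
    size_t n;
    luaL_checklstring(L, 1, &n);        /* argument 1 must be a string */
    lua_pushinteger(L, (lua_Integer)n);
    return 1;                           /* one result pushed onto the stack */
}

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    lua_register(L, "c_strlen", c_strlen);

    /* A Lua function that calls back into C. */
    luaL_dostring(L, "function describe(s) return 'length: ' .. c_strlen(s) end");

    /* C -> Lua direction: call the Lua function from C. */
    lua_getglobal(L, "describe");
    lua_pushstring(L, "ubicomp");
    if (lua_pcall(L, 1, 1, 0) != LUA_OK)
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));
    else
        printf("%s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```

Nothing in either direction verifies that the values exchanged have the types each side expects, which is exactly the kind of unchecked binding that makes polyglot systems hard to validate.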

An obstacle to the validation of polyglot software is the lack of tools that analyze source code written in different programming languages, under a unified framework. Returning to Fig.  3 , we have a system formed by two programs, written in different programming languages. Any tool that analyzes this system as a whole must be able to parse these two distinct syntaxes and infer the connection points between them. Work has been performed towards this end, but solutions are still very preliminary. As an example, Maas et al. [ 33 ] have implemented automatic ways to check if C arrays are correctly read by Python programs. As another example, Furr and Foster [ 36 ] have described techniques to ensure type-safety of OCaml-to-C and Java-to-C bindings.

A promising direction for analyzing polyglot systems is based on the idea of compiling source code that is only partially available. This feat consists of reconstructing the missing syntax and the missing declarations necessary to produce a minimal version of the original program that can be analyzed by typical tools. The analysis of partially available code makes it possible to test parts of a polyglot program separately, so as to produce a cohesive view of the entire system. This technique has been demonstrated to yield analyzable Java source code [ 37 ] and compilable C code [ 38 ]. Notice that this type of reconstruction is not restricted to high-level programming languages. Testimony to this fact is the notion of micro execution , introduced by Patrice Godefroid [ 39 ]. Godefroid’s tool allows the testing of x86 binaries, even when object files are missing. Nevertheless, in spite of these developments, reconstruction is still restricted to the static semantics of programs. The synthesis of behavior is a thriving discipline in computer science [ 40 ], but it is still far from enabling the certification of polyglot systems.

2.3 Distributed programming

Ubiquitous computing systems tend to be distributed. Indeed, it is difficult to conceive of a useful application in this setting that does not interact with other programs. It is also common knowledge that distributed programming opens up several doors to malicious users. Therefore, to make cyber-physical technology safer, security tools must be aware of the distributed nature of such systems. Yet, two main challenges stand in the way of this requirement: the difficulty of building a holistic view of the distributed application, and the lack of semantic information bound to the messages exchanged between processes that communicate through a network.

To be accurate, the analysis of a distributed system needs to account for the interactions between the several program parts that constitute the system [ 41 ]. Discovering such interactions is difficult, even if we restrict ourselves to code written in a single programming language. The difficulty stems from the lack of semantic information associated with the operations that send and receive messages. In other words, such operations are defined as part of a library, not as part of the programming language itself. Notwithstanding this fact, there are several techniques that infer communication channels between different pieces of source code. As examples, we have the algorithms of Greg Bronevetsky [ 42 ] and Teixeira et al. [ 43 ], which build a distributed view of a program’s control flow graph (CFG). Classic static analyses work without further modification on this distributed CFG. However, the distributed CFG is still a conservative approximation of the program behavior. Thus, it forces already imprecise static analyses to deal with communication channels that might never exist during the execution of the program. The rising popularity of actor-based libraries, like those available in languages such as Elixir [ 34 ] and Scala [ 44 ], is likely to mitigate the channel-inference problem. In the actor model, channels are explicit in the messages exchanged between the different processing elements that constitute a distributed system. Nevertheless, whether this model will be widely adopted by the IoT community remains to be seen.

Tools that perform automatic analyses of programs rely on static information to produce more precise results. In this sense, types are central to the understanding of software. For instance, in Java and other object-oriented languages, the type of objects determines how information flows along the program code. Despite this importance, however, the messages exchanged in the vast majority of distributed systems are not typed. The reason is that such messages, at least in C, C++ and assembly software, are plain arrays of bytes. There have been two major efforts to mitigate this problem: the addition of messages as first-class values to programming languages, and the implementation of points-to analyses able to deal with pointer arithmetic in languages that lack such a feature. On the first front, several programming languages, such as Scala, Erlang and Elixir, incorporate messages as basic constructs, providing developers with very expressive ways to implement the actor model [ 45 ] – a core foundation of distributed programming. Even though the construction of programming abstractions around the actor model is not a new idea [ 45 ], their rising popularity seems to be a phenomenon of the 2000s, boosted by increasingly expressive abstractions [ 46 ] and increasingly efficient implementations [ 47 ]. On the second front, researchers have devised analyses that infer the contents [ 48 ] and the size of arrays [ 49 ] in weakly typed programming languages. More importantly, recent years have seen a new flurry of algorithms designed to analyze C/C++-style pointer arithmetic [ 50 – 53 ]. The wide adoption of higher-level programming languages, coupled with the construction of new tools to analyze lower-level languages, is exciting. This trend seems to indicate that the programming languages community is dedicating ever more attention to the task of implementing safer distributed software. Therefore, even though the design of tools able to analyze the very fabric of UbiComp still poses several challenges to researchers, we can look to the future with optimism.
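To make the point concrete, the hypothetical C sketch below ships a sensor reading through a socket: once written, the payload is just a sequence of bytes, and nothing on the wire records its type, so a receiver that disagrees about the struct layout or endianness silently misinterprets it.

```c
#include <stdint.h>
#include <unistd.h>

struct reading {
    uint32_t sensor_id;
    float    value;
};

/* Sender side: the struct leaves the process as an untyped byte array. */
ssize_t send_reading(int sock, const struct reading *r) {
    return write(sock, r, sizeof *r);   /* just bytes on the wire */
}

/* Receiver side: no type information accompanies the message, so a
 * mismatched struct definition on the other end goes undetected. */
ssize_t recv_reading(int sock, struct reading *r) {
    return read(sock, r, sizeof *r);
}
```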

3 Long-term security

Various UbiComp systems are designed to remain in operation for many years, even decades [ 54 , 55 ]. Systems in the context of critical infrastructure, for example, often require an enormous financial investment to be designed and deployed in the field [ 56 ], and therefore offer a better return on investment if they remain in use for a longer period of time. The automotive area is a field of particular interest. Vehicles are expected to be reliable for decades [ 57 ], and renewing vehicle fleets or updating features ( recalls ) increases costs for their owners. Note that modern vehicles are part of the UbiComp ecosystem, as they are equipped with embedded devices with Internet connectivity. In the future, vehicles are expected to depend even more on data collected and shared across other vehicles and infrastructure through wireless technologies [ 58 ] in order to enable enriched driving experiences such as autonomous driving [ 59 ].

It is also worth mentioning that systems designed to endure a lifespan of several years or decades might suffer from a lack of future maintenance. The competition among players able to innovate is very aggressive, leading to a high rate of companies going out of business within a few years [ 60 ]. A world inundated with devices without proper maintenance will pose serious challenges in the future [ 61 ].

From the few aforementioned examples, it is already evident that there is an increasing need for UbiComp systems to be reliable for a longer period of time and, whenever possible, to require as few updates as possible. These requirements have a direct impact on the security features of such systems: comparatively speaking, they offer fewer opportunities for patching eventual security breaches than conventional systems. This is a critical situation given the intense and dynamic progress in devising and exploiting new security breaches. Therefore, it is of utmost importance to understand the scientific challenges of ensuring long-term security from the early stages of the design of a UbiComp system, instead of resorting to palliative measures a posteriori.

3.1 Cryptography as the core component

Ensuring long-term security is quite a challenging task for any system, not only for UbiComp systems. At a minimum, it requires that every single security component be future-proof by itself and also when connected to other components. To simplify this excessively large attack surface and still be able to provide helpful recommendations, we focus our attention on the main ingredient of most security mechanisms, as highlighted in Section 4 : cryptography.

There are numerous types of cryptographic techniques. The most traditional ones rely on the hardness of computational problems such as integer factorization [ 62 ] and discrete logarithm problems [ 63 , 64 ]. These problems are believed to be intractable with current cryptanalysis techniques and the available technological resources. Because of that, cryptographers have been able to build secure instantiations of cryptosystems based on such computational problems. For various reasons (to be discussed in the following sections), however, the future-proof condition of such schemes is at stake.

3.2 Advancements in classical cryptanalysis

The first threat to the future-proof condition of any cryptosystem refers to potential advancements in cryptanalysis, i.e., in techniques aimed at solving the underlying security problem in a more efficient way (with less processing time, memory, etc.) than originally predicted. Widely deployed schemes have a long track record of academic and industrial scrutiny, and therefore one would expect little or no progress in the cryptanalysis techniques targeting them. Yet, the literature has recently shown some interesting and unexpected results that may suggest the opposite.

In [ 65 ], for example, Barbulescu et al. introduced a new quasi-polynomial algorithm to solve the discrete logarithm problem in finite fields of small characteristic. The discrete logarithm problem is the underlying security problem of the Diffie-Hellman key exchange [ 66 ], the Digital Signature Algorithm [ 67 ] and their elliptic curve variants (ECDH [ 68 ] and ECDSA [ 67 ], respectively), to mention just a few widely deployed cryptosystems. This cryptanalytic result is restricted to finite fields of small characteristic, which is an important limitation for attacking real-world implementations of the aforementioned schemes. However, any sub-exponential algorithm that solves a longstanding problem should be seen as a relevant indication that the cryptanalysis literature might still be subject to eventual breakthroughs.

This situation should be considered by architects designing UbiComp systems that have long-term security as a requirement. Implementations that support various (i.e., higher-than-usual) security levels are preferable to those with fixed, single-key-size support. The same approach used for keys should be applied to other quantities in the scheme that impact its overall security. In this way, UbiComp systems would be able to consciously accommodate future cryptanalytic advancements or, at the very least, reduce the costs of security upgrades.

3.3 Future disruption due to quantum attacks

Quantum computers are expected to offer dramatic speedups for solving certain computational problems, as foreseen by Daniel R. Simon in his seminal paper on quantum algorithms [ 69 ]. Some of these speedups may enable significant advancements in technologies currently limited by their algorithmic inefficiency [ 70 ]. To our misfortune, however, some of the affected computational problems are the very ones currently used to secure widely deployed cryptosystems.

As an example, Lov K. Grover introduced a quantum algorithm [ 71 ] able to find an element in the domain of a function (of size N ) which leads, with high probability, to a desired output in only \(O(\sqrt {N})\) steps. This algorithm can be used to speed up the cryptanalysis of symmetric cryptography. Block ciphers with n -bit keys, for example, would offer only n /2 bits of security against a quantum adversary. Hash functions would be affected in ways that depend on the expected security property. In more detail, hash functions with n -bit digests would offer only n /3 bits of security against collision attacks and n /2 bits of security against pre-image attacks. Table  1 summarizes this assessment. In this context, AES-128 and SHA-256 (collision resistance) would not meet the minimum acceptable security level of 128 bits (of quantum security). Note that both block cipher and hash function constructions will remain secure if longer keys and digest sizes are employed. However, this would lead to important performance challenges. AES-256, for example, is about 40% less efficient than AES-128 (due to its 14 rounds, instead of 10).
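As a back-of-the-envelope check of the figures quoted above (not a formal security claim), applying the n /2 and n /3 rules to the parameters mentioned in the text gives:

\begin{align*}
\text{AES-128 (key search):}  &\quad 128/2 = 64 \text{ bits of quantum security}\\
\text{AES-256 (key search):}  &\quad 256/2 = 128 \text{ bits}\\
\text{SHA-256 (collision):}   &\quad 256/3 \approx 85 \text{ bits}\\
\text{SHA-256 (pre-image):}   &\quad 256/2 = 128 \text{ bits}
\end{align*}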

Even more critical than the scenario for symmetric cryptography, quantum computers will offer an exponential speedup for attacking most of the widely deployed public-key cryptosystems. This is due to Peter Shor’s algorithm [ 72 ], which can factor large integers and compute the discrete logarithm of an element in large groups in polynomial time. The impact of this work will be devastating to RSA and ECC-based schemes, as increasing the key sizes would not suffice: they will need to be completely replaced.

In the field of quantum-resistant public-key cryptosystems, i.e., alternative public-key schemes that can withstand quantum attacks, several challenges need to be addressed. The first refers to establishing a consensus in both academia and industry on how to defeat quantum attacks. In particular, there are two main techniques considered capable of withstanding quantum attacks, namely post-quantum cryptography (PQC) and quantum cryptography (QC). The former is based on different computational problems believed to be so hard that not even quantum computers would be able to tackle them. One important benefit of PQC schemes is that they can be implemented and deployed on the computers currently available [ 73 – 77 ]. The latter (QC) depends on the existence and deployment of a quantum infrastructure, and is restricted to key-exchange purposes [ 78 ]. The limited capabilities and the very high costs of deploying quantum infrastructure should eventually lead to a consensus in favor of post-quantum cryptography.

There are several PQC schemes available in the literature. Hash-based signatures (HBS), for example, are the most accredited solutions for digital signatures. The most modern constructions [ 76 , 77 ] represent improvements of the Merkle signature scheme [ 74 ]. One important benefit of HBS is that their security relies solely on certain well-known properties of hash functions (thus they are secure against quantum attacks, assuming appropriate digest sizes are used). Regarding other security features, such as key exchange and asymmetric encryption, the academic and industrial communities have not reached a consensus yet, although both the code-based and lattice-based cryptography literatures have already presented promising schemes [ 79 – 85 ]. Isogeny-based cryptography [ 86 ] is a much more recent approach that enjoys certain practical benefits (such as fairly small public key sizes [ 87 , 88 ]), although it has only recently started to benefit from a more comprehensive understanding of its cryptanalytic properties [ 89 ]. Regarding standardization efforts, NIST has recently started a standardization process for post-quantum cryptography schemes [ 90 ], which should take at least a few more years to be concluded. The current absence of standards represents an important challenge. In particular, future interoperability problems might arise.

Finally, another challenge in the context of post-quantum public-key cryptosystems refers to potentially new implementation requirements or constraints. As mentioned before, hash-based signatures are very promising post-quantum candidates (given the efficiency and security of hash functions), but they also lead to a new set of implementation challenges, such as the task of keeping the scheme's state secure. In more detail, most HBS schemes have private keys (their state ) that evolve over time. If rigid state-management policies are not in place, a signer can reuse the same private key twice, voiding the security guarantees offered by the scheme. Recently, initial works addressing these new implementation challenges have appeared in the literature [ 91 ]. A recently introduced HBS construction [ 92 ] showed how to get rid of the state-management issue at the price of much larger signatures. These examples indicate potentially new implementation challenges for PQC schemes that must be addressed by UbiComp system architects.
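As an illustration of the state-management pitfall (hypothetical code, not any particular HBS standard), the index of the next unused one-time key must be advanced and persisted before the corresponding signature is released; otherwise, a crash or rollback can lead to the same one-time key being used twice.

```c
#include <stdint.h>

/* Simplified view of the evolving part of an HBS private key. */
struct hbs_state {
    uint64_t next_index;   /* next unused one-time key */
    uint64_t max_index;    /* total number of one-time keys */
};

/* Stand-in for writing the state to non-volatile storage. */
static int persist(const struct hbs_state *st) { (void)st; return 0; }

/* Reserve a one-time key index: commit the new state *before* handing
 * the index to the signing routine, so that a crash cannot cause reuse. */
int hbs_reserve_index(struct hbs_state *st, uint64_t *idx_out) {
    if (st->next_index >= st->max_index)
        return -1;                 /* key pair exhausted */
    uint64_t idx = st->next_index;
    st->next_index = idx + 1;
    if (persist(st) != 0)
        return -1;                 /* refuse to sign if state cannot be saved */
    *idx_out = idx;
    return 0;
}
```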

4 Cryptographic engineering

UbiComp systems involve building blocks of very different natures: hardware components such as sensors and actuators, embedded software implementing communication protocols and interfaces with cloud providers, and ultimately operational procedures and other human factors. As a result, pervasive systems have a large attack surface that must be protected using a combination of techniques.

Cryptography is a fundamental part of any modern computing system, but it is unlikely to be the weakest component of its attack surface. Networking protocols, input-parsing routines, and even the interface code around cryptographic mechanisms are much more likely to be vulnerable to exploitation. However, a successful attack on cryptographic security properties is usually disastrous due to the risk concentrated in cryptographic primitives. For example, violations of confidentiality may cause massive data breaches involving sensitive information. Adversarial interference with communication integrity may allow command-injection attacks that deviate from the specified behavior. Availability is crucial to keep the system accessible to legitimate users and to guarantee continuous service provisioning, so cryptographic mechanisms must also be lightweight to minimize the potential for abuse by attackers.

Physical access by adversaries to portions of the attack surface is a particularly challenging aspect of deploying cryptography in UbiComp systems. By assumption, adversaries can recover long-term secrets and credentials that provide some control over a (hopefully small) portion of the system. Below we will explore some of the main challenges in deploying cryptographic mechanisms for pervasive systems, including how to manage keys and realize efficient and secure implementation of cryptography.

4.1 Key management

UbiComp systems are by definition heterogeneous platforms, connecting devices of massively different computation and storage power. Designing a cryptographic architecture for any heterogeneous system requires assigning clearly defined roles and corresponding security properties to the tasks under the responsibility of each entity in the system. Resource-constrained devices should receive less computationally intensive tasks, and their lack of tamper-resistance protections indicates that long-term secrets should not reside on these devices. More critical tasks involving expensive public-key cryptography should be delegated to more powerful nodes. A careful trade-off between security properties, functionality, and cryptographic primitives must then be addressed per device or class of devices [ 93 ], following a set of guidelines for pervasive systems:

Functionality: key management protocols must manage the lifetime of cryptographic keys and ensure accessibility to currently authorized users, but handling key management and authorization separately may increase complexity and vulnerabilities. A promising way of combining the two services into a cryptographically enforced access control framework is attribute-based encryption [ 94 , 95 ], where keys have sets of capabilities and attributes that can be authorized and revoked on demand.

Communication: components should minimize the amount of communication, at the risk of being unable to operate if communication is disrupted. Non-interactive approaches to key distribution [ 96 ] are recommended here, but advanced protocols based on bilinear pairings should be avoided due to recent advances in solving the discrete logarithm problem (in the so-called medium-prime case [ 97 ]). These advances forcibly increase parameter sizes, reduce performance and scalability, and may be improved further, favoring more traditional forms of asymmetric cryptography.

Efficiency: protocols should be lightweight and easy to implement, mandating that traditional public key infrastructures (PKIs) and expensive certificate-handling operations be restricted to the more powerful and connected nodes in the architecture. Alternative models supporting implicit certification include identity-based [ 98 ] (IBC) and certificate-less cryptography [ 99 ] (CLPKC), the former implying inherent key escrow. The difficulties with key revocation still impose obstacles to their wide adoption, despite progress [ 100 ]. A lightweight, pairing- and escrow-less authenticated key agreement based on an efficient key exchange protocol and implicit certificates combines the advantages of the two approaches, providing high performance while saving bandwidth [ 101 ].

Interoperability: pervasive systems are composed of components originating from different manufacturers. Supporting a cross-domain authentication and authorization framework is crucial for interoperability [ 102 ].

Cryptographic primitives involved in joint functionality must then be compatible with all endpoints and respect the constraints of the less powerful devices.

4.2 Lightweight cryptography

The emergence of huge collections of interconnected devices in UbiComp motivates the development of novel cryptographic primitives, under the moniker lightweight cryptography . The term lightweight does not imply weaker cryptography, but application-tailored cryptography that is especially designed to be efficient in terms of resource consumption, such as processor cycles, energy and memory footprint [ 103 ]. Lightweight designs target common security requirements for cryptography but may adopt less conservative choices or more recent building blocks.

As a first example, many new block ciphers have been proposed as lightweight alternatives to the Advanced Encryption Standard (AES) [ 104 ]. Important constructions are LS-Designs [ 105 ], modern ARX and Feistel networks [ 106 ], and substitution-permutation networks [ 107 , 108 ]. A notable candidate is the PRESENT block cipher, which has a 10-year record of resisting cryptanalytic attempts [ 109 ] and whose performance recently became competitive in software [ 110 ].

In the case of hash functions, a design may even trade off advanced security properties (such as collision resistance) for simplicity in some scenarios. A clear case is the construction of short Message Authentication Codes (MACs) from non-collision-resistant hash functions, such as in SipHash [ 111 ], or digital signatures from short-input hash functions [ 112 ]. In conventional applications, BLAKE2 [ 113 ] is a stronger drop-in replacement for recently cryptanalyzed standards [ 114 ] and is faster in software than the recently published SHA-3 standard [ 115 ].

Another trend is to provide confidentiality and authentication in a single step, through Authenticated Encryption with Associated Data (AEAD). This can be implemented with a block cipher operation mode (like GCM [ 116 ]) or a dedicated design. The CAESAR competition (Footnote 1) selected new AEAD algorithms for standardization across multiple use cases, such as lightweight and high-performance applications and a defense-in-depth setting. NIST has followed through and started its own standardization process for lightweight AEAD algorithms and hash functions (Footnote 2).
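For instance, a single AEAD call both encrypts a payload and authenticates it together with associated data such as a device identifier. The sketch below uses AES-128-GCM through OpenSSL's EVP interface (one possible way to exercise the GCM mode cited above; error handling is reduced to the essentials):

```c
#include <openssl/evp.h>

/* Encrypts `pt` and authenticates it together with `aad`, producing the
 * ciphertext plus a 16-byte authentication tag. Returns the ciphertext
 * length, or -1 on failure. */
int aes128_gcm_encrypt(const unsigned char *key,
                       const unsigned char *iv, int iv_len,
                       const unsigned char *aad, int aad_len,
                       const unsigned char *pt, int pt_len,
                       unsigned char *ct, unsigned char tag[16]) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ct_len = -1;
    if (!ctx) return -1;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, NULL, NULL) == 1 &&
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, iv_len, NULL) == 1 &&
        EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv) == 1 &&
        /* associated data: authenticated but not encrypted */
        EVP_EncryptUpdate(ctx, NULL, &len, aad, aad_len) == 1 &&
        EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) == 1) {
        ct_len = len;
        if (EVP_EncryptFinal_ex(ctx, ct + len, &len) == 1 &&
            EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1)
            ct_len += len;
        else
            ct_len = -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}
```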

In terms of public-key cryptography, Elliptic Curve Cryptography (ECC) [ 63 , 117 ] continues to be the main contender in the space against factoring-based cryptosystems [ 62 ], due to an underlying problem conjectured to require fully exponential time on classical computers. Modern instantiations of ECC enjoy high performance and implementation simplicity and are well suited to embedded systems [ 118 – 120 ]. The dominance of number-theoretic primitives is, however, threatened by quantum computers, as described in Section 3 .

The plethora of new primitives must be rigorously evaluated from both the security and the performance points of view, involving both theoretical work and engineering aspects. Implementations are expected to consume ever smaller amounts of energy [ 121 ], cycles and memory [ 122 ] on ever smaller devices and under more invasive attacks.

4.3 Side-channel resistance

If implemented without care, an otherwise secure cryptographic algorithm or protocol can leak critical information that may be useful to an attacker. Side-channel attacks [ 123 ] are a significant threat against cryptography and may use timing information, cache latency, or power and electromagnetic emanations to recover secret material. These attacks emerge from the interaction between the implementation and the underlying computer architecture, and they represent an intrinsic security problem in pervasive computing environments, since the attacker is assumed to have physical access to at least some of the legitimate devices.

Protecting against intrusive side-channel attacks is a challenging research problem, and countermeasures typically promote some degree of regularity in the computation. Isochronous, or constant-time, implementations were among the first strategies to tackle variances in execution time or latency in the memory hierarchy. The application of formal methods has enabled the first tools to verify the isochronicity of implementations, such as information flow analysis [ 124 ] and program transformations [ 125 ].
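A small, self-contained example of the idea (a common idiom rather than the output of any specific tool) is a tag comparison whose running time does not depend on where the first mismatching byte occurs:

```c
#include <stddef.h>
#include <stdint.h>

/* Unlike memcmp, which may return at the first mismatch and thereby leak
 * how many leading bytes of a secret tag are correct, this comparison
 * always scans the whole buffer and folds the differences together. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;   /* 1 if equal, 0 otherwise */
}
```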

While there is a recent trend towards constructing and standardizing cryptographic algorithms with some embedded resistance against the simpler timing and power analysis attacks [ 105 ], more powerful attacks such as differential power analysis [ 126 ] or fault attacks [ 127 ] are very hard to prevent or mitigate. Fault injection became a much more powerful attack methodology after it was demonstrated in software [ 128 ].

Masking techniques [ 129 ] are frequently investigated as a countermeasure to decorrelate leaked information from secret data, but they typically require robust entropy sources to achieve their goal. Randomness-recycling techniques have been useful as a heuristic, but the formal security analysis of such approaches is an open problem [ 130 ]. Modifications to the underlying architecture in terms of instruction set extensions, simplified execution environments and transactional mechanisms for restarting faulty computation are another promising research direction, but they may involve radical and possibly cost-prohibitive changes to current hardware.
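The essence of first-order Boolean masking can be shown in a few lines of C (illustrative only; weak_random_byte is a stand-in for the robust entropy source the countermeasure actually requires):

```c
#include <stdint.h>
#include <stdlib.h>

/* Placeholder entropy source -- NOT cryptographically secure; a real
 * masked implementation needs a robust RNG, as noted in the text. */
static uint8_t weak_random_byte(void) { return (uint8_t)rand(); }

/* A secret byte s is split into two shares m0, m1 with s == m0 ^ m1,
 * so the device never manipulates s directly. */
typedef struct { uint8_t m0, m1; } masked_t;

static masked_t mask(uint8_t s) {
    uint8_t r = weak_random_byte();
    masked_t x = { (uint8_t)(s ^ r), r };
    return x;
}

/* Linear (XOR) operations can be computed share-wise, without ever
 * recombining -- and thus exposing -- the unmasked secrets. */
static masked_t masked_xor(masked_t a, masked_t b) {
    masked_t c = { (uint8_t)(a.m0 ^ b.m0), (uint8_t)(a.m1 ^ b.m1) };
    return c;
}

static uint8_t unmask(masked_t x) { return (uint8_t)(x.m0 ^ x.m1); }
```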

5 Resilience

UbiComp relies on essential services such as connectivity, routing and end-to-end communication. Advances in these essential services make Weiser’s envisioned pervasive applications possible, as they can count on transparent communication while meeting the expectations and requirements of end users in their daily activities. Among users’ expectations and requirements, the availability of services – not only communication services, but all services provided to users by UbiComp – is paramount. Users increasingly expect, and pay for, services that are available 24/7. This is even more relevant for critical UbiComp systems, such as those related to healthcare, emergency response, and vehicular embedded systems.

Resilience is highlighted in this article because it is one of the pillars of security. Resilience aims at identifying, preventing, detecting and responding to process or technological failures in order to recover from, or mitigate, the damages and financial losses resulting from service unavailability [ 131 ]. In general, service unavailability has been associated with non-intentional failures; however, the intentional exploitation of service-availability breaches is increasingly becoming disruptive and out of control, as seen in the Distributed Denial of Service (DDoS) attack against the company DYN, a leading DNS provider, and the DDoS attack against the company OVH, the French website hosting giant [ 132 , 133 ]. The latter reached an intense volume of malicious traffic of approximately 1 Tbps, generated from a large number of geographically distributed and infected devices, such as printers, IP cameras, residential gateways and baby monitors. Those devices are directly related to the modern concept of UbiComp systems [ 134 ], and they are intended to provide ubiquitous services to users.

However, what attracts the most attention here is the negative side effect of exploiting ubiquity against service availability. It is a fact today that Mark Weiser’s idea of the computer for the 21st century has opened doors to new kinds of highly disruptive attacks. Those attacks are, in general, based on the invisibility of, and our unawareness of, the devices in our homes, workplaces, cities, and countries. Precisely because of this, people seem not to pay enough attention to basic practices, such as changing the default passwords of Internet-connected devices like CCTV cameras, baby monitors and smart TVs. This simple fact has been pointed out as the main cause of the two DDoS attacks mentioned before, and a report by the global professional services company Deloitte suggests that DDoS attacks, which compromise exactly service availability, increased in size and scale in 2017, thanks in part to the growing multiverse of connected things (Footnote 3). The report also suggests that DDoS attacks will become more frequent, with an estimated 10 million attacks over a few months.

As there is no guarantee of completely avoiding these attacks, resilient solutions become a way to mitigate damages and quickly restore the availability of services. Resilience is thus necessary and complementary to the other solutions discussed in the previous sections of this article. Hence, this section focuses on highlighting the importance of resilience in the context of UbiComp systems. We overview the state of the art regarding resilience in UbiComp systems and point out future directions for research and innovation [ 135 – 138 ]. We also understand that resilience in these systems still requires much investigation; nevertheless, we believe it is our role to raise this point for discussion in this article.

In order to contextualize resilience in the scope of UbiComp, it is important to observe that improvements in information and communication technologies, such as wireless networking, have increased the use of distributed systems in our everyday lives. Network access is becoming ubiquitous through portable devices and wireless communications, making people more and more dependent on them. This rising dependence calls for simultaneously high levels of reliability and availability. Current networks are composed of heterogeneous portable devices, generally communicating among themselves in a wireless multi-hop manner [ 139 ]. These wireless networks can autonomously adapt to changes in their environment, such as device position, traffic pattern and interference. Each device can dynamically reconfigure its topology, coverage and channel allocation in accordance with such changes.

UbiComp poses nontrivial challenges to resilience design due to the characteristics of current networks, such as the shared wireless medium, highly dynamic network topology, multi-hop communication and low physical protection of portable devices [ 140 , 141 ]. Moreover, the absence of central entities in different scenarios increases the complexity of resilience management, particularly when it is associated with access control, node authentication and cryptographic key distribution.

Network characteristics, as well as the limitations of other kinds of solutions against attacks that disrupt service availability, reinforce the fact that no network is totally immune to attacks and intrusions. Therefore, new approaches are required to promote the availability of network services. Such requirements motivate the design of resilient network services. In this work, we focus on the delivery of data from one UbiComp device to another as a fundamental network functionality, and we emphasize three essential services: physical and link-layer connectivity, routing, and end-to-end logical communication. Resilience, however, has also been considered from other perspectives. We follow the claim that resilience is achieved through a cross-layer security solution that integrates preventive (e.g., cryptography and access control), reactive (e.g., intrusion detection systems) and tolerant (e.g., packet redundancy) defense lines in a self-adaptive and coordinated way [ 131 , 142 ].

But what are the open challenges to achieving resilience in the UbiComp context? First of all, we emphasize the heterogeneity of the devices and technologies that compose UbiComp environments. The integration of large-scale systems, such as cloud data centers, with tiny devices, such as wearable and implantable sensors, is a huge challenge in itself due to the resulting complexity. In addition, integrating preventive, reactive and tolerant solutions, and adapting them, is even harder in the face of the different requirements of these devices, their capabilities in terms of memory and processing, and the requirements of the applications. Further, heterogeneity in terms of communication technologies and protocols makes it challenging to analyze network behavior and topologies, which in conventional systems are employed to assist in the design of resilient solutions.

Another challenge is how to deal with scale. UbiComp systems tend to be hyper-scale and geographically distributed. How, then, to cope with the complexity resulting from that? How to define and construct models to understand these systems and offer resilient services? Finally, we also point out uncertainty and speed as challenges. If, on the one hand, it is hard to model, analyze and define resilient services in such complex systems, on the other hand, uncertainty is the norm in them, while speed and low response times are strong requirements for their applications. Hence, how to address all these elements together? How to manage them in order to offer resilient services that account for the diverse requirements of the various applications?

All these questions call for deep investigation and pose real challenges. However, they also represent opportunities for applied research in designing and engineering resilient systems, particularly in the UbiComp context and especially if we advocate designing resilient systems that manage the three defense lines in an adaptive way. We believe that this kind of management can promote a great advance both for applied research and for resilience itself.

6 Identity management

An Authentication and Authorization Infrastructure (AAI) is the central element for providing security in distributed applications and is a way to fulfill the security requirements of UbiComp systems. With this infrastructure, it is possible to provide identity management (IdM) that prevents legitimate or illegitimate users/devices from accessing unauthorized resources. IdM can be defined as a set of processes, technologies and policies used for assurance of identity information (e.g., identifiers, credentials, attributes), assurance of the identity of an entity (e.g., users, devices, systems), and enabling business and security applications [ 143 ]. Thus, IdM allows these identities to be used by authentication, authorization and auditing mechanisms [ 144 ]. A proper identity management approach is necessary for pervasive computing to be invisible to users [ 145 ]. Figure  4 provides an overview of the topics discussed in this section.

Figure 4: Pervasive IdM Challenges

According to [ 143 ], an electronic identity (eID) comprises a set of data about an entity that is sufficient to identify that entity in a particular digital context. An eID may be composed of:

Identifier - a series of digits, characters and symbols, or any other form of data, used to uniquely identify an entity (e.g., UserIDs, e-mail addresses, URIs and IP addresses). IoT requires a globally unique identifier for each entity in the network;

Credentials - an identifiable object that can be used to authenticate the claimant (e.g., digital certificates, keys, tokens and biometrics);

Attributes - descriptive information bound to an entity that specifies its characteristics.

In UbiComp systems, identity has both a digital and a physical component. Some entities might have only an online or physical representation, whereas others might have a presence in both planes. IdM requires relationships not only between entities in the same planes but also across them [ 145 ].

6.1 Identity management system

An IdM system deals with the lifecycle of an identity, which consists of the registration, storage, retrieval, provisioning and revocation of identity attributes [ 146 ]. Note that managing a device’s identity lifecycle is more complicated than managing a person’s, due to the complexity of the operational phases of a device (i.e., from manufacturing to being removed and re-commissioned) in the context of a given application or use case [ 102 , 147 ].

For example, consider a typical device lifecycle. In the pre-deployment phase, some cryptographic material is loaded into the device during its manufacturing process. Next, the owner purchases the device and receives a PIN that grants the owner initial access to it. The device is later installed and commissioned within a network by an installer during the bootstrapping phase. The device identity and the secret keys used during normal operation are provided to the device during this phase. After being bootstrapped, the device is in operational mode. During this operational phase, the device will need to prove its identity (D2D communication) and to control access to its resources and data. For devices with lifetimes spanning several years, maintenance cycles will be required. During each maintenance phase, the software on the device can be upgraded, or applications running on the device can be reconfigured. The device continues to loop through the operational phase until it is decommissioned at the end of its lifecycle. Furthermore, the device can also be removed and re-commissioned to be used in a different system under a different owner, thereby starting the lifecycle all over again. During this phase, the cryptographic material held by the device is wiped, and the owner is unbound from the device [ 147 ].

An IdM system involves two main entities: identity provider (IdP - responsible for authentication and user/device information management in a domain) and service provider (SP - also known as relying party, which provides services to user/device based on their attributes). The arrangement of these entities in an IdM system and the way in which they interact with each other characterize the IdM models, which can be traditional (isolated or silo), centralized, federated or user-centric [ 146 ].

In the traditional model, the IdP and SP are grouped into a single entity whose role is to authenticate and control access to its users or devices without relying on any other entity. In this model, providers have no mechanism for sharing identity information with other organizations or entities. This makes identity provisioning cumbersome for end users and devices, since they need to spread their sensitive data across different providers [ 146 , 148 ].

The centralized model emerged as a possible solution to avoid the redundancies and inconsistencies of the traditional model and to give the user or device a seamless experience. Here, a central IdP becomes responsible for collecting and provisioning the user’s or device’s identity information in a manner that enforces the preferences of the user or device. The centralized model allows identities to be shared among SPs and provides Single Sign-On (SSO). This model has several drawbacks, as the IdP not only becomes a single point of failure but also may not be trusted by all users, devices and service providers [ 146 ]. In addition, a centralized IdP must provide different mechanisms for authenticating either users or autonomous devices in order to meet UbiComp system requirements [ 149 ].

UbiComp systems are composed of heterogeneous devices that need to prove their authenticity to the entities they communicate with. One of the problems in this scenario is the possibility of devices being located in different security domains that use distinct authentication mechanisms. An approach to providing IdM in a scenario with multiple security domains is an AAI that uses the federated IdM model (FIM) [ 150 , 151 ]. In a federation, trust relationships are established among IdPs and SPs to enable the exchange of identity information and service sharing. Existing trust relationships guarantee that users/devices authenticated at their home IdP may access protected resources provided by SPs from other security domains in the federation [ 148 ]. Single Sign-On (SSO) is obtained when the same authentication event can be used to access different federated services [ 146 ].

From the user-authentication perspective, the negative points of the centralized and federated models center primarily on the IdP, as it has full control over the user’s data [ 148 ]. Besides, the user depends on an online IdP to provide the required credentials. In the federated model, users cannot guarantee that their information will not be disclosed to third parties without their consent [ 146 ].

The user-centric model gives users full control over transactions involving their identity data [ 148 ]. In this model, the user identity can be stored on a personal authentication device, such as a smartphone or a smartcard. Users have the freedom to choose the IdPs they will use and to control the personal information disclosed to SPs. The IdPs continue to act as a trusted third party between users and SPs, but they act according to the user’s preferences [ 152 ]. The major drawback of the user-centric model is that it is not able to handle delegation. Several solutions that adopted this model combine it with the FIM or centralized models; novel solutions, however, prefer the federated model.

6.1.1 Authentication

User and device authentication within an integrated authentication infrastructure (where the IdP is responsible for both user and device authentication) might use a centralized IdM model [ 149 , 153 ] or a traditional model [ 154 ]. Other works [ 155 – 157 ] proposed AAIs for IoT using the federated model, but only for user authentication, not for device authentication. Kim et al. [ 158 ] propose a centralized solution that enables the use of different authentication mechanisms for devices, chosen based on the devices’ energy autonomy. However, user authentication is not provided.

Based on the traditional model, an AAI composed of a suite of protocols that incorporate authentication and access control throughout the entire IoT device lifecycle is proposed in [ 102 ]. Domenech et al. [ 151 ] propose an AAI for the Web of Things, which is based on the federated IdM model (FIM) and enables SSO for users and devices. In this AAI, IdPs may be implemented as a service in a cloud (IdPaaS - Identity Provider as a Service) or on premise. Some IoT platforms, such as Amazon Web Services (AWS) IoT, Microsoft Azure IoT, and the Google Cloud IoT platform, provide IdPaaS for user and device authentication.

Authentication mechanisms and protocols consume computational resources. Thus, integrating an AAI into a resource-constrained embedded device can be a challenge. As mentioned in Section 4.2 , a set of lightweight cryptographic algorithms, which do not impose certificate-related overheads on devices, can be used to provide device authentication in UbiComp systems. There is a recent trend of investigating the benefits of identity-based cryptography (IBC) for providing cross-domain authentication to constrained devices [ 102 , 151 , 159 ]. However, some IoT platforms still provide certificate-based device authentication (such as Azure IoT and WSO2) or per-device public/private key authentication (RSA and elliptic curve algorithms) using JSON Web Tokens (such as the Google Cloud IoT Platform and WSO2).

Identity theft has been one of the fastest-growing crimes in recent years. Currently, password-based credentials are the most used in user authentication mechanisms, despite their weaknesses [ 160 ]. There are multiple opportunities for impersonation and other attacks that fraudulently claim another subject’s identity [ 161 ]. Multi-factor authentication (MFA) is a solution created to improve the robustness of the authentication process; it generally combines two or more authentication factors ( something you know , something you have , and something you are ) for successful authentication [ 161 ]. In this type of authentication, an attacker needs to compromise two or more factors, which makes the attack more complex. Several IdPs and SPs already offer MFA to authenticate their users; device authentication, however, is still an open question.

6.1.2 Authorization

In a UbiComp system, a security domain can contain client devices and SP devices (embedded SPs). In this context, both physical devices and online providers can offer services. Devices join and leave, SPs appear and disappear, and access control must adapt itself to maintain the user's perception of being continuously and automatically authenticated [ 145 ]. The data access control provided by an AAI embedded in the device is also a significant requirement. Since these devices are cyber-physical systems (CPS), a security threat against them can impact the physical world. Thus, if a device is improperly accessed, there is a chance that this violation will affect the physical world, risking people’s well-being and even their lives [ 151 ].

Physical access control systems (PACS) provide access control to physical resources, such as buildings, offices or any other protected areas. Current commercial PACS are based on the traditional IdM model and usually use low-cost devices such as smart cards. However, there is a trend towards treating PACS as an (IT) service, i.e., unified physical and digital access [ 162 ]. In IoT scenarios, the translation of SSO authentication credentials for PACS across multiple domains (in a federation) is also a challenge, due to interoperability, assurance and privacy concerns.

In the context of IoT, authorization mechanisms are based on access control models used in the classic Internet, such as the discretionary model (for example, Access Control Lists (ACL) [ 163 ]), Capability-Based Access Control (CapBAC) [ 164 , 165 ], Role-Based Access Control (RBAC) [ 156 , 166 , 167 ] and Attribute-Based Access Control (ABAC) [ 102 , 168 , 169 ]. ABAC and RBAC are the models best aligned with federated IdM and UbiComp systems. As proposed in [ 151 ], an IdM system that supports different access control models, such as RBAC and ABAC, can more easily adapt to the needs of the administration processes in the context of UbiComp.

Regarding policy management models for accessing devices, there are two approaches: provisioning [ 151 , 170 ] and outsourcing [ 150 , 151 , 171 , 172 ]. In provisioning, the device is responsible for the authorization decision making, which requires the policy to be stored locally. In this approach, the Policy Enforcement Point (PEP), which controls access to the device, and the Policy Decision Point (PDP) are both on the same device. In outsourcing, the decision making takes place outside the device, in a centralized external service that replies to all policy evaluation requests from all devices (PEPs) of a domain. In this case, the decision making can be offered as a service (PDPaaS) in the cloud or on premise [ 151 ].

For constrained devices, the provisioning approach is robust, since it does not depend on an external service. However, in this approach, the decision making and the access policy management can be costly for the device. The outsourcing approach simplifies policy management, but it introduces communication overhead and a single point of failure (the centralized PDP).

6.2 Federated identity management system

The IdM models guide the construction of policies and business processes for IdM systems but do not indicate which protocols or technologies should be adopted. The SAML (Security Assertion Markup Language) [ 173 ], OAuth2 [ 174 ] and OpenID Connect specifications stand out in the federated IdM context [ 175 , 176 ] and are adequate for UbiComp systems. SAML, developed by OASIS, is an XML-based framework for describing and exchanging security information between business partners. It defines the syntax and rules for requesting, creating, communicating and using SAML assertions, which enable SSO across domain boundaries. Besides, SAML can describe authentication events that use different authentication mechanisms [ 177 ]. These characteristics are very important for achieving interoperability between the security technologies of different administrative domains. According to [ 151 , 178 , 179 ], the first step toward achieving interoperability is the adoption of SAML. However, XML-based SAML is not a lightweight standard and has a high computational cost for resource-constrained IoT devices [ 176 ].

The Enhanced Client and Proxy (ECP) profile of SAML defines the exchange of security information involving clients that do not use a web browser, and it consequently allows SSO authentication for devices. Nevertheless, ECP requires the SOAP protocol, which is not suitable due to its high computational cost [ 180 ]. Presumably because of this cost, the profile is still not widely used on IoT devices.

OpenID Connect (OIDC) is an open framework that adopts the user-centric and federated IdM models. It is decentralized, which means no central authority approves or registers SPs. With OpenID, users can choose the OpenID Provider (IdP) they want to use. OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows clients (SPs) to verify user or device identity based on the authentication performed by an Authorization Server (OpenID Provider), as well as to obtain basic profile information about the user or device in an interoperable and REST-like manner [ 181 ]. OIDC uses JSON-based security tokens (JWTs) that enable identity and security information to be shared across security domains; consequently, it is a lightweight standard suitable for IoT. Nevertheless, it is a developing standard that requires more time and enterprise acceptance to become established [ 176 ].

An IoT architecture based on OpenID, which handles authentication and access control in a federated environment, was proposed in [ 156 ]. Devices and users may register at a trusted third party in the home domain, which assists the user’s authentication process. In [ 182 ], OpenID Connect is used for the authentication and authorization of users and devices and to establish trust relationships among entities in an ambient assisted living environment (with medical devices acting as SPs), in a federated approach.

SAML and OIDC are used for user authentication in cloud platforms (Google, AWS, Azure). The FIWARE platform (Footnote 4), an open-source IoT platform, supports SAML- and OAuth2-based user authentication via its Keyrock Identity Management Generic Enabler. However, platforms usually adopt certificate-based or token-based authentication for devices, using a centralized or traditional model. In future work, it may be interesting to perform practical investigations of SAML (the ECP profile with different lightweight authentication mechanisms) and OIDC for various types of IoT devices and cross-domain scenarios, and to compare them with current authentication solutions.

The OAuth protocol (Footnote 5) is an open authorization framework that allows a user or application to delegate access to Web resources to a third party without sharing its credentials. With the OAuth protocol, it is possible to use a JSON Web Token or a SAML assertion as a means of requesting an OAuth 2.0 access token as well as for client authentication [ 176 ]. Fremantle et al. [ 150 ] discuss the use of OAuth for IoT applications that use the MQTT protocol, a lightweight message queue protocol (publish/subscribe model) for small sensors and mobile devices.

A known standard for authorization in distributed systems is XACML (eXtensible Access Control Markup Language). XACML is an XML-based language for describing authorization policies and the request/response exchange used for access control decisions. Authorization decisions may be based on user/device attributes, on requested actions, and on environment characteristics. Such features enable the building of flexible authorization mechanisms. Furthermore, XACML is generic, regardless of the access control model used (RBAC, ABAC), and it supports authorization decision making either locally on the device (provisioning model) or by an external service provider (outsourcing model). Another important aspect is that there are profiles and extensions that provide interoperability between XACML and SAML [ 183 ].

6.3 Pervasive IdM challenges

Current federation technologies rely on preconfigured static agreements, which are not well suited to the open environments of UbiComp scenarios. These limitations negatively impact scalability and flexibility [ 145 ]. Trust establishment is the key to scalability. Although FIM protocols can cover security aspects, the dynamic establishment of trust relationships remains an open question [ 145 ]. Some requirements, such as usability, device authentication and the use of lightweight cryptography, have not been properly considered in federated IdM solutions for UbiComp systems.

Interoperability is another key requirement for a successful IdM system. UbiComp systems integrate heterogeneous devices that interact with humans, with systems on the Internet, and with other devices, which raises interoperability concerns. These systems can be formed by heterogeneous domains (organizations) that go beyond the boundaries of a federation with the same AAI. The interoperability between federations that use different federated identity protocols (SAML, OpenID and OAuth) is still a problem, and also a research opportunity.

Lastly, IdM systems for UbiComp systems must appropriately protect user information and adopt proper personal data protection policies. Section 7 discusses the challenges to provide privacy in UbiComp systems.

7 Privacy implications

UbiComp systems tend to collect a lot of data and generate a lot of information. Correctly used, information generates innumerable benefits for our society and has provided us with a better life over the years. However, information can also be used for illicit purposes, just as computer systems are used for attacks. Protecting private information is a great challenge that can often seem impractical, for instance, protecting customers’ electrical consumption data from their electricity distribution company [ 184 – 186 ].

Ensuring security is a necessary condition for ensuring privacy: if the communication between clients and a service provider is not secure, then privacy is not guaranteed. It is not, however, a sufficient condition: the communication may be secure, and yet the service provider may use the data in a way that is not allowed. We can use cryptography to ensure security as well as privacy. Nevertheless, even when one uses encrypted communication, the metadata of the network traffic might reveal private information. The first challenge is to determine the extent of the data's relevance and the impact of data leakage.

7.1 Application scenario challenges

Finding which data might be sensitive is a challenging task. Some cultures classify certain data as sensitive, while others classify the same data as public. Another challenge is to handle regulations from different countries.

7.1.1 Identifying sensitive data

Classifying which data may be sensitive is a challenging task. Article 12 of the Universal Declaration of Human Rights, proclaimed by the United Nations General Assembly in Paris on 10 December 1948, states: “No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.” Lawmakers have improved privacy laws around the world. However, there is still plenty of room for improvement, especially when we consider data about people, animals, and products. Providers can use such data to profile and manipulate people and markets. Unfair competitors might use private industrial data to gain advantages over other companies.

7.1.2 Regulation

UbiComp systems tend to run worldwide, so their developers need to deal with many laws from distinct cultures. The abundance of laws is a challenge for international institutions, and so is their absence. On the one hand, an excess of laws forces institutions to cope with a huge bureaucracy in order to comply with all of them. On the other hand, the absence of laws leads to unfair competition, because unethical companies can use private data to gain advantages over ethical ones. Business models must adopt privacy-preserving protocols to sustain democracy and avoid a surveillance society (see [ 187 ]). Such protocols are a solution to the dilemma between privacy and information, but they come with their own technological challenges.

7.2 Technological challenges

We can deal with two kinds of data: data already collected by legacy systems and private-by-design data collected through privacy-preserving protocols, for instance, databases used in old systems and messages exchanged in privacy-preserving protocols, respectively. If a scenario fits both categories, we can simply treat it as already collected data in the short term.

7.3 Already collected data

One may use a dataset for information retrieval while keeping the identity of the data owners anonymous, or apply data mining techniques over a private dataset; several techniques are used in privacy-preserving data mining [ 188 ]. The ARX Data Anonymization Tool Footnote 6 is an interesting tool for anonymizing already collected data. In the following, we present several techniques used to provide privacy for already collected data.

7.3.1 Anonymization

Currently, there are several techniques for anonymization and for evaluating the level of anonymity achieved, for instance, k-anonymity, l-diversity, and t-closeness [ 189 ]. They operate over sets E of records that are indistinguishable with respect to a set of identifying attributes in a table.

The k-anonymity method suppresses or generalizes table columns so that each E contains at least k records. It seems safe, but only four spatio-temporal points are enough to uniquely identify 95% of cellphone users [ 190 ].
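
As a minimal sketch of the idea, the Python fragment below checks whether a toy table satisfies k-anonymity by grouping records on their quasi-identifier columns; the column names and records are hypothetical.

```python
from collections import Counter

def is_k_anonymous(table, quasi_identifiers, k):
    """True if every combination of quasi-identifier values appears in at least k records."""
    groups = Counter(tuple(row[c] for c in quasi_identifiers) for row in table)
    return all(count >= k for count in groups.values())

table = [
    {"zip": "13000", "age": "2*", "disease": "flu"},
    {"zip": "13000", "age": "2*", "disease": "cold"},
    {"zip": "14000", "age": "3*", "disease": "flu"},
    {"zip": "14000", "age": "3*", "disease": "asthma"},
]
print(is_k_anonymous(table, ["zip", "age"], k=2))  # True
```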

The l-diversity method requires that each E have at least l “well-represented” values for each sensitive column. Well-represented can be defined in three ways:

at least l distinct values for each sensitive column;

for each E, the Shannon entropy is bounded such that \(H(E)\geqslant \log _{2} l\), where \(H(E)=-\sum _{s\in S}\Pr (E,s)\log _{2}(\Pr (E,s))\), S is the domain of the sensitive column, and Pr(E, s) is the fraction of records in E whose sensitive value is s (a minimal check of this condition is sketched after this list);

the most common values do not appear too frequently, and the least common values do not appear too rarely.

Note that some tables do not have l distinct sensitive values. Furthermore, for the entropy-based definition, the entropy of the whole table must be at least \(\log _{2} l\). Moreover, the frequencies of common and uncommon values are usually not close to each other.
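
A minimal sketch of the entropy-based definition above: for each group E, it computes the Shannon entropy of the sensitive column and compares it with log2 l. The table layout is the same hypothetical one used in the k-anonymity sketch.

```python
import math
from collections import Counter

def entropy_l_diverse(group, sensitive_column, l):
    """Check H(E) >= log2(l) for the sensitive values of one group E."""
    counts = Counter(row[sensitive_column] for row in group)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h >= math.log2(l)

group = [
    {"zip": "13000", "age": "2*", "disease": "flu"},
    {"zip": "13000", "age": "2*", "disease": "cold"},
]
print(entropy_l_diverse(group, "disease", l=2))  # True: two equally frequent values
```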

We say that E has t-closeness if the distance between the distribution of a sensitive column within E and the distribution of that column over the whole table is no more than a threshold t. A table has t-closeness if every E in the table has t-closeness. In this case, the method creates a trade-off between data usefulness and privacy.

7.3.2 Differential privacy

The idea of differential privacy is similar to the idea of indistinguishability in cryptography. To define it, let ε be a positive real number and \(\mathcal {A}\) be a probabilistic algorithm that takes a dataset as input. We say that \(\mathcal {A}\) is ε-differentially private if, for every pair of datasets D 1 and D 2 that differ in a single element, and for every subset S of the image of \(\mathcal {A}\), we have \(\Pr \left [{\mathcal {A}}\left (D_{1}\right)\in S\right ]\leq e^{\epsilon }\times \Pr \left [{\mathcal {A}}\left (D_{2}\right)\in ~S\right ],\) where the probability is taken over the algorithm’s randomness.
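
As a concrete, minimal example of an ε-differentially private algorithm, the sketch below releases a counting query (sensitivity 1) with Laplace noise of scale 1/ε; the noise is sampled as the difference of two exponential variates, which follows a Laplace distribution. The query and parameter values are illustrative.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism for a counting query: the released value is epsilon-differentially private."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)  # Laplace(0, 1/epsilon)
    return true_count + noise

# Neighbouring datasets (they differ in one record) yield statistically close outputs:
# Pr[A(D1) in S] <= e^epsilon * Pr[A(D2) in S].
print(dp_count(1000, epsilon=0.5))
print(dp_count(1001, epsilon=0.5))
```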

Differential privacy is not a metric in the mathematical sense. However, if each algorithm is ε-differentially private for some ε, we can construct a metric d to compare the distance between two algorithms, \(d\left (\mathcal {A}_{1},\mathcal {A}_{2}\right)=|\epsilon _{1}-\epsilon _{2}|.\) In this way, we can determine whether two algorithms are equivalent ( ε 1 = ε 2 ), and we can determine the distance from an ideal algorithm by computing \(d\left (\mathcal {A}_{1},\mathcal {A}_{\text {ideal}}\right)=|\epsilon _{1}-0|=\epsilon _{1}.\)

7.3.3 Entropy and the degree of anonymity

The degree of anonymity g can be measured with the Shannon entropy \(H(X)=\sum _{{i=1}}^{{N}}\left [p_{i}\cdot \log _{2} \left ({\frac {1}{p_{i}}}\right)\right ],\) where H(X) is the network entropy, N is the number of nodes, and p i is the probability associated with node i. The maximum entropy occurs when the distribution is uniform, i.e., every node has probability 1/N, hence H M = log2(N). Therefore, the degree of anonymity g is defined by \(g=1-{\frac {H_{M}-H(X)}{H_{M}}}={\frac {H(X)}{H_{M}}}.\)
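
The sketch below computes the degree of anonymity g = H(X)/H M for a given probability assignment over the nodes; the example distributions are hypothetical.

```python
import math

def degree_of_anonymity(probabilities):
    """g = H(X) / H_M, where H_M = log2(N) is the entropy of the uniform distribution."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return h / math.log2(len(probabilities))

print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0: attacker learns nothing
print(degree_of_anonymity([0.70, 0.10, 0.10, 0.10]))  # < 1.0: one node stands out
```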

As with differential privacy, we can construct a metric to compare two networks by computing d ( g 1 , g 2 )=| g 1 − g 2 |, and we can check whether they are equivalent ( g 1 = g 2 ). Thus, we can determine the distance from an ideal anonymity network by computing d ( g 1 , g ideal )=| g 1 −1|.

The network can be replaced by a dataset, but in this model, each record must be assigned a probability.

7.3.4 Complexity

Complexity analysis can also be used as a metric to measure the best-case time required to retrieve information from an anonymized dataset. For private-by-design data, it can likewise measure the time required to break a privacy-preserving protocol. The time can be estimated by asymptotic analysis or by counting the number of steps needed to break the method.

All techniques have advantages and disadvantages. However, even if the complexity prevents leakage, even if the algorithm satisfies differential privacy, and even if the degree of anonymity is maximal, privacy may still be violated. For example, in an election with 3 voters, if 2 of them collude, the third voter’s privacy is violated regardless of the algorithm used. In [ 191 ], we find how to break noise-based protocols for smart grids, even when they provide differential privacy.

Cryptography should ensure privacy in the same way that it ensures security: an encrypted message should achieve the maximum value of the privacy metrics, just as cryptography provides maximum guarantees for security. To evaluate this, we should consider the best algorithm an attacker could use to leak private information and compute its worst-case complexity.

7.3.5 Probability

We can use probabilities to measure the chances of leakage. This approach is independent of the algorithm used to protect privacy.

For example, consider an election with 3 voters. If 2 voters cast yes and 1 casts no, an attacker knows that the probability that a given voter voted yes is 2/3 and that the probability of a no vote is 1/3. The same logic applies as the numbers of voters and candidates grow.

In contrast to the yes/no case, we may want to keep measured values private. For attackers to discover a time series of three points, they can represent each point by a number of stars, i.e., symbols ⋆. The attackers then split the total number of stars into three boxes. If the sum of the series is 7, one possibility would be ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆. For simplicity, attackers can separate the stars with bars instead of boxes; hence, ⋆ ⋆ ⋆ ⋆ | ⋆ | ⋆ ⋆ represents one possible solution (the series 4, 1, 2). With this notation, the binomial coefficient of 7 stars plus 2 bars choose 7 stars determines the number of possible solutions, i.e., \( {7+2 \choose 7}=\frac {9!}{7!(9-7)!}=36.\)

Generalizing, if t is the number of points in a time series and s is its sum, then the number of possible time series among which the attacker must identify the correct one is given by s plus t−1 choose s, i.e., \({s+t-1 \choose s}=\frac {(s+t-1)!}{s!\,(t-1)!}.\)
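
A minimal check of this count in Python, using the example above (t = 3 points summing to s = 7):

```python
from math import comb

def candidate_series(t: int, s: int) -> int:
    """Number of non-negative integer time series of length t that sum to s."""
    return comb(s + t - 1, s)

print(candidate_series(t=3, s=7))  # 36, matching the stars-and-bars example
```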

If we collect multiple time series, we can form a table, e.g., a list of candidates with the number of votes per state. The tallyman could reveal only the total number of voters per state and the total number of votes per candidate, from which one could infer the possible numbers of votes per state [ 191 ]. Data from previous elections may help this estimation. The result of the election could be computed over encrypted data in a much more secure way than anonymization by k-anonymity, l-diversity, and t-closeness. Still, depending on the size of the table and its values, the time series can be recovered.

In general, we can consider measurements instead of values. Anonymization techniques try to reduce the number of measurements in the table. Counterintuitively, the smaller the number of measurements, the greater the chance of discovering them [ 191 ].

If we consider privacy by design, we do not have already collected data.

7.4 Private-by-design data

Messages is the usual term for private-by-design data. Messages are data that are transmitted, processed, and stored. In privacy-preserving protocols, individual messages should not be leaked. CryptDB Footnote 7 is an interesting tool that allows queries over encrypted datasets: although the messages are stored in a dataset, they remain encrypted under the users’ keys. To keep performance reasonable, privacy-preserving protocols aggregate or consolidate messages and solve a specific problem.

7.4.1 Computing all operators

In theory, we can compute a Turing machine over encrypted data, i.e., we can use a technique called fully homomorphic encryption [ 192 ] to compute any operator over encrypted data. The big challenge of fully homomorphic encryption is performance; constructing a fully homomorphic encryption scheme suitable for many application scenarios is a herculean task. The most common operation is addition. Thus, most privacy-preserving protocols use additive homomorphic encryption [ 193 ] and DC-Nets (from “Dining Cryptographers”) [ 194 ]. Independently of the operation, the former yields functions, while the latter yields families of functions. We can construct an asymmetric DC-Net based on an additive homomorphic encryption scheme [ 194 ].
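
As an illustration of additive homomorphism, the sketch below implements the Paillier cryptosystem with toy parameters (such small primes must never be used in practice): the product of two ciphertexts decrypts to the sum of the plaintexts.

```python
# Toy Paillier cryptosystem (additively homomorphic); small primes, demo only.
import random
from math import gcd

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # mu = (L(g^lambda mod n^2))^-1 mod n

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = enc(15), enc(27)
print(dec((c1 * c2) % n2))  # 42: Enc(15) * Enc(27) decrypts to 15 + 27
```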

7.4.2 Trade-off between enforcement and malleability

Privacy enforcement has a high cost. With DC-Nets, we can enforce privacy; however, every encrypted message must be included in the computation for users to decrypt and access the protocol output. This is good for privacy but bad for fault tolerance; for illustration, consider an election in which all voters are required to vote. Homomorphic encryption enables protocols to decrypt and produce output even when an encrypted message is missing; indeed, it enables the decryption of a single encrypted message. Therefore, homomorphic encryption alone cannot enforce privacy; for illustration, consider an election in which one party can read and change all votes. Homomorphic encryption schemes are malleable, while DC-Nets are non-malleable. On the one hand, malleability simplifies the process and improves fault tolerance but precludes privacy enforcement. On the other hand, non-malleability enforces privacy but complicates the process and diminishes fault tolerance. In addition, key distribution with homomorphic encryption is easier than with DC-Net schemes.

7.4.3 Key distribution

Homomorphic encryption needs a public-private key pair, and whoever owns the private key controls all the information. Assume that a receiver generates the key pair and sends the public key to the senders over a secure communication channel. The senders then use the same key to encrypt their messages. Since homomorphic encryption schemes are probabilistic, senders can use the same key to encrypt the same message and still produce ciphertexts that differ from one another. However, the receiver does not know who sent each encrypted message.

DC-Nets need a private key for each user and a public key for the protocol. Since DC-Nets do not distinguish senders from receivers, the users are usually called participants, and they generate their own private keys. Practical symmetric DC-Nets require that participants send a key to each other over a secure communication channel. Afterwards, each participant holds a private key given by the list of shared keys. Hence, each participant encrypts by computing \(\mathfrak {M}_{i,j}\leftarrow \text {Enc}\left (m_{i,j}\right)=m_{i,j}+\sum _{o\in \mathcal {M}-\{i\}}\, \text {Hash}\left (s_{i,o}\ || \ j\right)-\text {Hash}\left (s_{o,i}\ || \ j\right),\) where m i , j is the message sent by participant i at time j, Hash is a secure hash function agreed upon by the participants, s i , o is the secret key sent from participant i to participant o, s o , i is, similarly, the secret key received by i from o, and || is the concatenation operator. Each participant i can send the encrypted message \(\mathfrak {M}_{i,j}\) to the others. Participants can then decrypt the aggregated encrypted messages by computing \(\text {Dec}=\sum _{i\in \mathcal {M}}\, \mathfrak {M}_{i,j}=\sum _{i\in \mathcal {M}}\, m_{i,j}.\) Note that if one or more messages are missing, decryption is infeasible. Asymmetric DC-Nets do not require a private key derived from shared keys: each participant simply generates a private key. Subsequently, they use a homomorphic encryption scheme or a symmetric DC-Net to add their private keys, generating the decryption key.
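
The sketch below follows the symmetric DC-Net construction above: each participant masks its message with hash-based pads derived from the pairwise shared keys, and the pads cancel when all encrypted messages are summed. It works over a fixed modulus so that the cancellation is exact; the key sizes, round number, and message values are hypothetical.

```python
import hashlib, os

MOD = 2 ** 64  # fixed modulus so that the pairwise pads cancel exactly

def pad(key: bytes, j: int) -> int:
    """Hash(key || j), truncated to 64 bits."""
    return int.from_bytes(hashlib.sha256(key + j.to_bytes(8, "big")).digest()[:8], "big")

def encrypt(i, m, shared, participants, j):
    """M_{i,j} = m + sum_{o != i} (Hash(s_{i,o} || j) - Hash(s_{o,i} || j)) mod MOD."""
    c = m
    for o in participants:
        if o != i:
            c = (c + pad(shared[i][o], j) - pad(shared[o][i], j)) % MOD
    return c

participants = [0, 1, 2]
shared = {i: {o: os.urandom(16) for o in participants if o != i} for i in participants}
messages = {0: 5, 1: 7, 2: 30}
ciphertexts = {i: encrypt(i, messages[i], shared, participants, j=1) for i in participants}

# Aggregated decryption: the sum of all encrypted messages equals the sum of the plaintexts.
print(sum(ciphertexts.values()) % MOD)  # 42
```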

Homomorphic encryption schemes have lower overhead than DC-Nets for setting up and distributing keys. Symmetric DC-Nets need O ( I 2 ) messages to set up the keys, where I is the number of participants. Figure  5 depicts the messages required to set up keys using (a) symmetric DC-Nets and (b) homomorphic encryption. Asymmetric DC-Nets can be set up more easily than symmetric DC-Nets, at the price of trusting the homomorphic encryption scheme.

Fig. 5 Setting up the keys. a Symmetric DC-Nets; b homomorphic encryption

7.4.4 Aggregation and consolidation

Aggregation and consolidation are easier with DC-Nets than with homomorphic encryption. Using DC-Nets, participants can simply broadcast their encrypted messages or send them directly to an aggregator. Using homomorphic encryption, senders cannot send encrypted messages directly to the receiver, who would then be able to decrypt individual messages. Instead, the senders must somehow aggregate the encrypted messages so that the receiver obtains only the encrypted aggregation, which is a challenge with homomorphic encryption and trivial with DC-Nets, due to the trade-off described in Section 7.4.2 . In this work, DC-Nets refer to fully connected DC-Nets; for non-fully connected DC-Nets, aggregation is based on trust and introduces new challenges. Aggregation and consolidation are sometimes used as synonyms, but consolidation is more complicated and produces more elaborate information than aggregation. For example, aggregating encrypted textual messages simply joins them, whereas consolidating them could produce a speech synthesis.

7.4.5 Performance

Fully homomorphic encryption tends to have large keys and requires prohibitive processing time. In contrast, asymmetric DC-Nets and partially homomorphic encryption normally use modular multi-exponentiations, which can be computed in logarithmic time [ 195 ]. Symmetric DC-Nets are efficient only for a small number of participants, because each participant needs to iterate over all participants to encrypt a message. The number of participants is not relevant for asymmetric DC-Nets or for homomorphic encryption.

8 Forensics

Digital forensics is a branch of forensic science addressing the recovery and investigation of material found in digital devices. Evidence collection and interpretation play a key role in forensics. Conventional forensic approaches address issues related to computer forensics and information forensics separately. There is, however, a growing trend in security and forensics research toward interdisciplinary approaches that provide a rich set of forensic capabilities to facilitate the authentication of data as well as of the access conditions, including who, when, where, and how.

In this trend, there are two major types of forensic evidence [ 196 ]. One type is intrinsic to the device, the information processing chain, or the physical environment, taking such forms as the special characteristics associated with specific types of hardware, software processing, or environments; the unique noise patterns that act as a signature of a specific device unit; and certain regularities or correlations related to a device, a processing step, or their combination. The other type comprises extrinsic approaches, whereby specially designed data are proactively injected into the signals/data or into the physical world and later extracted and examined to infer or verify the hosting data’s origin, integrity, processing history, or capturing environment.

Amid the convergence of digital and physical systems, with sensors, actuators, and computing devices becoming closely tied together, an emerging framework called Proof-Carrying Sensing (PCS) has been proposed [ 197 ]. It was inspired by Proof-Carrying Code, a trusted computing framework that associates foreign executables with a model proving that they have not been tampered with and that they function as expected. In the new UbiComp context of cyber-physical systems, where mobility and resource constraints are common, the physical world can be leveraged as a channel that encapsulates properties difficult to tamper with remotely, such as proximity and causality, in order to create a challenge-response function. Such a Proof-Carrying Sensing framework can help authenticate devices, collected data, and locations. Compared to traditional multifactor or out-of-band authentication mechanisms, it has the unique advantage that authentication proofs are embedded in sensor data and can be continuously validated over time and space without running complicated cryptographic algorithms.

In terms of the intrinsic/extrinsic viewpoint above, the physical data available to establish mutual trust in the PCS framework can be intrinsic to the physical environment (such as temperature, luminosity, noise, or electrical frequency) or extrinsic to it, for example, actively injected by a device into the physical world. By monitoring the propagation of intrinsic or extrinsic data, a device can confirm its reception by other devices located in its vicinity. The challenge of designing and securely implementing such protocols can be addressed through a synergy of expertise in signal processing, statistical detection and learning, cryptography, software engineering, and electronics.

To help appreciate the role of intrinsic and extrinsic evidence in addressing security and forensics in UbiComp, which involves both digital and physical elements, we now discuss two examples. Consider first an intrinsic signature of power grids. The electric network frequency (ENF) is the supply frequency of power distribution grids, with a nominal value of 60 Hz (North America) or 50 Hz (Europe). At any given time, the instantaneous value of the ENF usually fluctuates around its nominal value as a result of the dynamic interaction between load variations in the grid and the control mechanisms for power generation. These variations are nearly identical across all locations of the same grid at a given time due to the interconnected nature of the grid. The changing values of the instantaneous ENF over time form an ENF signal, which can be intrinsically captured by audio/visual recordings (Fig.  6 ) or other sensors [ 198 , 199 ]. This has led to recent forensic applications, such as validating the time-of-recording of an ENF-containing multimedia signal and estimating its recording location using concurrent ENF reference signals from power grids.

Fig. 6 An example of intrinsic evidence related to the power grid: spectrograms of ENF signals in concurrent recordings of a audio, b visual, and c power mains. A cross-correlation study shows the similarity between the media and the power-line reference at different time lags, with a strong peak at the temporal alignment of the matching grid

Next, consider the recent work by Satchidanandan and Kumar [ 200 ] introducing a notion of watermarking in a cyber-physical system, which can be viewed as a class of extrinsic signatures. If an actuator injects into the system a properly designed probing signal that is unknown in advance to the other nodes, then, based on knowledge of the cyber-physical system’s dynamics and other properties, the actuator can examine the sensors’ reports about the signals at various points and can potentially infer whether there is malicious activity in the system and, if so, where and how.

A major challenge and research opportunity lies in discovering and characterizing suitable intrinsic and extrinsic evidence. Although qualitative properties of some signatures are known, it is important to develop quantitative models that characterize normal and abnormal behavior in the context of the overall system. Along this line, the exploration of physical models might yield analytic approximations of such properties; in the meantime, data-driven learning approaches can be used to gather statistical data characterizing normal and abnormal behaviors. Building on these elements, a strong synergy across the boundaries of the traditionally separate domains of computer forensics, information forensics, and device forensics should be developed so as to achieve comprehensive capabilities of system forensics in UbiComp.

9 Conclusion

In the words of Mark Weiser, Ubiquitous Computing is “the idea of integrating computers seamlessly into the world at large” [ 1 ]. Thus, far from being a phenomenon of our time, the design and practice of UbiComp systems were already being discussed a quarter of a century ago. In this article, we have revisited this notion, which permeates the most varied levels of our society, from a security and privacy point of view. In the coming years, these two topics will occupy much of the time of researchers and engineers. In our opinion, the use of this time should be guided by a few observations, which we list below:

UbiComp software is often produced as a combination of different programming languages, sharing a common core frequently implemented in a type-unsafe language such as C, C++, or assembly. Applications built in this domain tend to be distributed, and their analysis, e.g., via static analysis tools, needs to take a holistic view of the system.

The long life span of some of these systems, coupled with the difficulty (both operational and cost-wise) of updating and redeploying them, makes them vulnerable to the inexorable progress of technology and cryptanalysis techniques. This brings new (and possibly disruptive) players into the discussion, such as quantum adversaries.

Key management is a critical component of any secure or private real-world system. After security roles and key management procedures are clearly defined for all entities in the framework, a set of matching cryptographic primitives must be deployed. Physical access and constrained resources complicate the design of efficient and secure cryptographic algorithms, which are often vulnerable to side-channel attacks. Hence, current research challenges in this space include more efficient key management schemes, in particular ones supporting some form of revocation; the design of lightweight cryptographic primitives that facilitate correct and secure implementation; and cheaper side-channel countermeasures made available through advances in algorithms and embedded architectures.

Given the increasing popularization of UbiComp systems, people are becoming more and more dependent on their services for performing commercial, financial, medical, and social transactions. This rising dependence requires simultaneously high levels of reliability, availability, and security, which strengthens the importance of designing and implementing resilient UbiComp systems.

One of the main challenges in providing pervasive IdM is to ensure the authenticity of devices and users, as well as adaptive authorization, in scenarios with multiple, heterogeneous security domains.

Several databases currently store sensitive data. Moreover, a vast number of sensors are constantly collecting new sensitive data and storing them in clouds. Privacy-preserving protocols are being designed and perfected to enhance users’ privacy in specific scenarios. Cultural interpretations of privacy, the variety of laws, big data from legacy systems in clouds, processing time, latency, and key distribution and management, among the other issues mentioned above, are challenges for the development of privacy-preserving protocols.

The convergence of physical and digital systems poses both challenges and opportunities in offering forensic capabilities that facilitate the authentication of data as well as of the access conditions, including who, when, where, and how; a synergistic use of intrinsic and extrinsic evidence combined with interdisciplinary expertise will be key.

Given these observations, and the importance of ubiquitous computing, it is easy to conclude that the future holds fascinating challenges waiting for the attention of academia and industry.

Finally, note that the observations and predictions presented in this work regarding how UbiComp may evolve represent our view of the field based on today’s technology landscape. New scientific discoveries and technological inventions, as well as economic, social, and policy factors, may lead to new and/or different trends in the technology’s evolutionary paths.

https://competitions.cr.yp.to/caesar.html

https://csrc.nist.gov/projects/lightweight-cryptography

Deloitte’s annual Technology, Media and Telecommunications Predictions 2017 report: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-deloitte-2017-tmt-predictions.pdf

https://www.fiware.org .

The OAuth 2.0 core authorization framework is described by the IETF in RFC 6749 and other specifications and profiles.

https://arx.deidentifier.org/

https://css.csail.mit.edu/cryptdb/

Abbreviations

Authentication and Authorization Infrastructure

Attribute Based Access Control

Access Control List

Advanced Encryption Standard

Capability Based Access Control

control flow graph

Certificateless cryptography

Distributed Denial of Service

Elliptic Curve Cryptography

Enhanced Client and Proxy

Electronic identity

Electric network frequency

Federated Identity Management Model

Hash-Based Signatures

Identity-based

Identity Management

Identity Provider

Identity Provider as a Service

Internet of things

Message Authentication Codes

Multi-factor authentication

Physical access control systems

Proof-Carrying Sensing

Policy Decision Point

Policy Decision Point as a Service

Policy Enforcement Point

Public key infrastructures

Post-quantum cryptography

Quantum cryptography

Role Based Access Control

Service Provider

Single Sign-On

Pervasive and ubiquitous computing

eXtensible Access Control Markup Language

Weiser M. The computer for the 21st century. Sci Am. 1991; 265(3):94–104.

Weiser M. Some computer science issues in ubiquitous computing. Commun ACM. 1993; 36(7):75–84.

Lyytinen K, Yoo Y. Ubiquitous computing. Commun ACM. 2002; 45(12):63–96.

Estrin D, Govindan R, Heidemann JS, Kumar S. Next century challenges: Scalable coordination in sensor networks. In: MobiCom’99. New York: ACM: 1999. p. 263–70.

Pottie GJ, Kaiser WJ. Wireless integrated network sensors. Commun ACM. 2000; 43(5):51–8.

Ashton K. That ’Internet of Things’ Thing. RFiD J. 2009; 22:97–114.

Atzori L, Iera A, Morabito G. The internet of things: a survey. Comput Netw. 2010; 54(15):2787–805.

Mann S. Wearable computing: A first step toward personal imaging. Computer. 1997; 30(2):25–32.

Martin T, Healey J. 2006’s wearable computing advances and fashions. IEEE Pervasive Comput. 2007; 6(1):14–6.

Lee EA. Cyber-physical systems-are computing foundations adequate. In: NSF Workshop On Cyber-Physical Systems: Research Motivation, Techniques and Roadmap, volume 2. Citeseer: 2006.

Rajkumar RR, Lee I, Sha L, Stankovic J. Cyber-physical systems: the next computing revolution. In: 47th Design Automation Conference. ACM: 2010.

Abowd GD, Mynatt ED. Charting past, present, and future research in ubiquitous computing. ACM Trans Comput Human Interact (TOCHI). 2000; 7(1):29–58.

Stajano F. Security for ubiquitous computing.Hoboken: Wiley; 2002.

Pierce BC. Types and programming languages, 1st edition. Cambridge: The MIT Press; 2002.

Cousot P, Cousot R. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL. New York: ACM: 1977. p. 238–52.

McMillan KL. Symbolic model checking. Norwell: Kluwer Academic Publishers; 1993.

Leroy X. Formal verification of a realistic compiler. Commun ACM. 2009; 52(7):107–15.

Rice HG. Classes of recursively enumerable sets and their decision problems. Trans Amer Math Soc. 1953; 74(1):358–66.

Wilson RP, Lam MS. Efficient context-sensitive pointer analysis for c programs. In: PLDI. New York: ACM: 1995. p. 1–12.

Cadar C, Dunbar D, Engler D. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI. Berkeley: USENIX: 2008. p. 209–24.

Coppa E, Demetrescu C, Finocchi I. Input-sensitive profiling. In: PLDI. New York: ACM: 2012. p. 89–98.

Graham SL, Kessler PB, McKusick MK. gprof: a call graph execution profiler (with retrospective). In: Best of PLDI. New York: ACM: 1982. p. 49–57.

Godefroid P, Klarlund N, Sen K. Dart: directed automated random testing. In: PLDI. New York: ACM: 2005. p. 213–23.

Nethercote N, Seward J. Valgrind: a framework for heavyweight dynamic binary instrumentation. In: PLDI. New York: ACM: 2007. p. 89–100.

Luk C-K, Cohn R, Muth R, Patil H, Klauser A, Lowney G, Wallace S, Reddi VJ, Hazelwood K. Pin: Building customized program analysis tools with dynamic instrumentation. In: PLDI. New York: ACM: 2005. p. 190–200.

Rimsa AA, D’Amorim M, Pereira FMQ. Tainted flow analysis on e-SSA-form programs. In: CC. Berlin: Springer: 2011. p. 124–43.

Serebryany K, Bruening D, Potapenko A, Vyukov D. Addresssanitizer: a fast address sanity checker. In: ATC. Berkeley: USENIX: 2012. p. 28.

Russo A, Sabelfeld A. Dynamic vs. static flow-sensitive security analysis. In: CSF. Washington: IEEE: 2010. p. 186–99.

Carlini N, Barresi A, Payer M, Wagner D, Gross TR. Control-flow bending: On the effectiveness of control-flow integrity. In: SEC. Berkeley: USENIX: 2015. p. 161–76.

Klein G, Elphinstone K, Heiser G, Andronick J, Cock D, Derrin P, Elkaduwe D, Engelhardt K, Kolanski R, Norrish M, Sewell T, Tuch H, Winwood S. sel4: Formal verification of an os kernel. In: SOSP. New York: ACM: 2009. p. 207–20.

Jourdan J-H, Laporte V, Blazy S, Leroy X, Pichardie D. A formally-verified c static analyzer. In: POPL. New York: ACM: 2015. p. 247–59.

Soares LFG, Rodrigues RF, Moreno MF. Ginga-NCL: the declarative environment of the brazilian digital tv system. J Braz Comp Soc. 2007; 12(4):1–10.

Maas AJ, Nazaré H, Liblit B. Array length inference for c library bindings. In: ASE. New York: ACM: 2016. p. 461–71.

Fedrecheski G, Costa LCP, Zuffo MK. ISCE. Washington: IEEE: 2016.

Rellermeyer JS, Duller M, Gilmer K, Maragkos D, Papageorgiou D, Alonso G. The software fabric for the internet of things. In: IOT. Berlin, Heidelberg: Springer-Verlag: 2008. p. 87–104.

Furr M, Foster JS. Checking type safety of foreign function calls. ACM Trans Program Lang Syst. 2008; 30(4):18:1–18:63.

Dagenais B, Hendren L. OOPSLA. New York: ACM: 2008. p. 313–28.

Melo LTC, Ribeiro RG, de Araújo MR, Pereira FMQ. Inference of static semantics for incomplete c programs. Proc ACM Program Lang. 2017; 2(POPL):29:1–29:28.

Godefroid P. Micro execution. In: ICSE. New York: ACM: 2014. p. 539–49.

Manna Z, Waldinger RJ. Toward automatic program synthesis. Commun ACM. 1971; 14(3):151–65.

López HA, Marques ERB, Martins F, Ng N, Santos C, Vasconcelos VT, Yoshida N. Protocol-based verification of message-passing parallel programs. In: OOPSLA. New York: ACM: 2015. p. 280–98.

Bronevetsky G. Communication-sensitive static dataflow for parallel message passing applications. In: CGO. Washington: IEEE: 2009. p. 1–12.

Teixeira FA, Machado GV, Pereira FMQ, Wong HC, Nogueira JMS, Oliveira LB. Siot: Securing the internet of things through distributed system analysis. In: IPSN. New York: ACM: 2015. p. 310–21.

Lhoták O, Hendren L. Context-sensitive points-to analysis: Is it worth it? In: CC. Berlin, Heidelberg: Springer: 2006. p. 47–64.

Agha G. An overview of actor languages. In: OOPWORK. New York: ACM: 1986. p. 58–67.

Haller P, Odersky M. Actors that unify threads and events. In: Proceedings of the 9th International Conference on Coordination Models and Languages. COORDINATION’07. Berlin, Heidelberg: Springer-Verlag: 2007. p. 171–90.

Imam SM, Sarkar V. Integrating task parallelism with actors. In: OOPSLA. New York: ACM: 2012. p. 753–72.

Cousot P, Cousot R, Logozzo F. A parametric segmentation functor for fully automatic and scalable array content analysis. In: POPL. New York: ACM: 2011. p. 105–18.

Nazaré H, Maffra I, Santos W, Barbosa L, Gonnord L, Pereira FMQ. Validation of memory accesses through symbolic analyses. In: OOPSLA. New York: ACM: 2014.

Paisante V, Maalej M, Barbosa L, Gonnord L, Pereira FMQ. Symbolic range analysis of pointers. In: CGO. New York: ACM: 2016. p. 171–81.

Maalej M, Paisante V, Ramos P, Gonnord L, Pereira FMQ. Pointer disambiguation via strict inequalities. In: Proceedings of the 2017 International Symposium on Code Generation and Optimization, CGO ’17 . Piscataway: IEEE Press: 2017. p. 134–47.

Maalej M, Paisante V, Pereira FMQ, Gonnord L. Combining range and inequality information for pointer disambiguation. Sci Comput Program. 2018; 152(C):161–84.

Sui Y, Fan X, Zhou H, Xue J. Loop-oriented pointer analysis for automatic simd vectorization. ACM Trans Embed Comput Syst. 2018; 17(2):56:1–56:31.

Poovendran R. Cyber-physical systems: Close encounters between two parallel worlds [point of view]. Proc IEEE. 2010; 98(8):1363–6.

Conti JP. The internet of things. Commun Eng. 2006; 4(6):20–5.

Rinaldi SM, Peerenboom JP, Kelly TK. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst. 2001; 21(6):11–25.

US Bureau of Transportation Statistics BTS. Average age of automobiles and trucks in operation in the united states. 2017. Accessed 14 Sept 2017.

U.S. Department of Transportation. IEEE 1609 - Family of Standards for Wireless Access in Vehicular Environments WAVE. 2013.

Maurer M, Gerdes JC, Lenz B, Winner H. Autonomous driving: technical, legal and social aspects.Berlin: Springer; 2016.

Patel N. 90% of startups fail: Here is what you need to know about the 10%. 2015. https://www.forbes.com/sites/neilpatel/2015/01/16/90-of-startups-will-fail-heres-what-you-need-to-know-about-the-10/ . Accessed 09 Sept 2018.

Jacobsson A, Boldt M, Carlsson B. A risk analysis of a smart home automation system. Futur Gener Comput Syst. 2016; 56(Supplement C):719–33.

Rivest RL, Shamir A, Adleman LM. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM. 1978; 21(2):120–6.

Miller VS. Use of elliptic curves in cryptography. In: CRYPTO, volume 218 of Lecture Notes in Computer Science. Berlin: Springer: 1985. p. 417–26.

Koblitz N. Elliptic curve cryptosystems. Math Comput. 1987; 48(177):203–9.

Barbulescu R, Gaudry P, Joux A, Thomé E. A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: EUROCRYPT 2014. Berlin: Springer: 2014. p. 1–16.

Diffie W, Hellman M. New directions in cryptography. IEEE Trans Inf Theor. 2006; 22(6):644–54.

Barker E. Federal Information Processing Standards Publication (FIPS PUB) 186-4 Digital Signature Standard (DSS). 2013.

Barker E, Johnson D, Smid M. Special publication 800-56A recommendation for pair-wise key establishment schemes using discrete logarithm cryptography. 2006.

Simon DR. On the power of quantum computation. In: Symposium on Foundations of Computer Science (SFCS 94). Washington: IEEE Computer Society: 1994. p. 116–23.

Knill E. Physics: quantum computing. Nature. 2010; 463(7280):441–3.

Grover LK. A fast quantum mechanical algorithm for database search. In: Proceedings of ACM STOC 1996. New York: ACM: 1996. p. 212–19.

Shor PW. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J Comput. 1997; 26(5):1484–509.

McEliece RJ. A public-key cryptosystem based on algebraic coding theory. Deep Space Netw. 1978; 44:114–6.

Merkle RC. Secrecy, authentication and public key systems / A certified digital signature. PhD thesis, Stanford. 1979.

Regev O. On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of ACM STOC ’05. STOC ’05. New York: ACM: 2005. p. 84–93.

Buchmann J, Dahmen E, Hülsing A. Xmss - a practical forward secure signature scheme based on minimal security assumptions In: Yang B-Y, editor. PQCrypto. Berlin: Springer: 2011. p. 117–29.

McGrew DA, Curcio M, Fluhrer S. Hash-Based Signatures. Internet Engineering Task Force (IETF). 2017. https://datatracker.ietf.org/doc/html/draft-mcgrew-hash-sigs-13 . Accessed 9 Sept 2018.

Bennett CH, Brassard G. Quantum cryptography: public key distribution and coin tossing. In: Proceedings of IEEE ICCSSP’84. New York: IEEE Press: 1984. p. 175–9.

Bos J, Costello C, Ducas L, Mironov I, Naehrig M, Nikolaenko V, Raghunathan A, Stebila D. Frodo: Take off the ring! practical, quantum-secure key exchange from LWE. Cryptology ePrint Archive, Report 2016/659. 2016. http://eprint.iacr.org/2016/659 .

Alkim E, Ducas L, Pöppelmann T, Schwabe P. Post-quantum key exchange - a new hope. Cryptology ePrint Archive, Report 2015/1092. 2015. http://eprint.iacr.org/2015/1092 .

Misoczki R, Tillich J-P, Sendrier N, Barreto PSLM. MDPC-McEliece: New McEliece variants from moderate density parity-check codes. In: IEEE International Symposium on Information Theory – ISIT’2013. Istambul: IEEE: 2013. p. 2069–73.

Hoffstein J, Pipher J, Silverman JH. Ntru: A ring-based public key cryptosystem. In: International Algorithmic Number Theory Symposium. Berlin: Springer: 1998. p. 267–88.

Bos J, Ducas L, Kiltz E, Lepoint T, Lyubashevsky V, Schanck JM, Schwabe P, Stehlé D. Crystals–kyber: a CCA-secure module-lattice-based KEM. IACR Cryptol ePrint Arch. 2017; 2017:634.

Aragon N, Barreto PSLM, Bettaieb S, Bidoux L, Blazy O, Deneuville J-C, Gaborit P, Gueron S, Guneysu T, Melchor CA, Misoczki R, Persichetti E, Sendrier N, Tillich J-P, Zemor G. BIKE: Bit flipping key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Barreto PSLM, Gueron S, Gueneysu T, Misoczki R, Persichetti E, Sendrier N, Tillich J-P. Cake: Code-based algorithm for key encapsulation. In: IMA International Conference on Cryptography and Coding. Berlin: Springer: 2017. p. 207–26.

Jao D, De Feo L. Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies. In: International Workshop on Post-Quantum Cryptography. Berlin: Springer: 2011. p. 19–34.

Costello C, Jao D, Longa P, Naehrig M, Renes J, Urbanik D. Efficient compression of sidh public keys. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Berlin: Springer: 2017. p. 679–706.

Jao D, Azarderakhsh R, Campagna M, Costello C, DeFeo L, Hess B, Jalali A, Koziel B, LaMacchia B, Longa P, Naehrig M, Renes J, Soukharev V, Urbanik D. SIKE: Supersingular isogeny key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Galbraith SD, Petit C, Shani B, Ti YB. On the security of supersingular isogeny cryptosystems. In: International Conference on the Theory and Application of Cryptology and Information Security. Berlin: Springer: 2016. p. 63–91.

National Institute of Standards and Technology (NIST). Standardization Process on Post-Quantum Cryptography. 2016. http://csrc.nist.gov/groups/ST/post-quantum-crypto/ . Accessed 9 Sept 2018.

McGrew D, Kampanakis P, Fluhrer S, Gazdag S-L, Butin D, Buchmann J. State management for hash-based signatures. In: International Conference on Research in Security Standardization. Springer: 2016. p. 244–60.

Bernstein DJ, Hopwood D, Hülsing A, Lange T, Niederhagen R, Papachristodoulou L, Schneider M, Schwabe P, Wilcox-O’Hearn Z. SPHINCS: Practical Stateless Hash-Based Signatures. Berlin, Heidelberg: Springer Berlin Heidelberg; 2015. p. 368–97.

Barker E, Barker W, Burr W, Polk W, Smid M. Recommendation for key management part 1: General (revision 3). NIST Spec Publ. 2012; 800(57):1–147.

Waters B. Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization. In: Public Key Cryptography. LNCS, 6571 vol.Berlin: Springer: 2011. p. 53–70.

Liu Z, Wong DS. Practical attribute-based encryption: Traitor tracing, revocation and large universe. Comput J. 2016; 59(7):983–1004.

Oliveira LB, Aranha DF, Gouvêa CPL, Scott M, Câmara DF, López J, Dahab R. Tinypbc: Pairings for authenticated identity-based non-interactive key distribution in sensor networks. Comput Commun. 2011; 34(3):485–93.

Kim T, Barbulescu R. Extended tower number field sieve: A new complexity for the medium prime case. In: CRYPTO (1). LNCS, 9814 vol.Berlin: Springer: 2016. p. 543–71.

Boneh D, Franklin MK. Identity-based encryption from the weil pairing. SIAM J Comput. 2003; 32(3):586–615.

Al-Riyami SS, Paterson KG. Certificateless public key cryptography. In: ASIACRYPT. LNCS, 2894 vol.Berlin: Springer: 2003. p. 452–73.

Boldyreva A, Goyal V, Kumar V. Identity-based encryption with efficient revocation. IACR Cryptol ePrint Arch. 2012; 2012:52.

Simplício Jr. MA, Silva MVM, Alves RCA, Shibata TKC. Lightweight and escrow-less authenticated key agreement for the internet of things. Comput Commun. 2017; 98:43–51.

Neto ALM, Souza ALF, Cunha ÍS, Nogueira M, Nunes IO, Cotta L, Gentille N, Loureiro AAF, Aranha DF, Patil HK, Oliveira LB. Aot: Authentication and access control for the entire iot device life-cycle. In: SenSys. New York: ACM: 2016. p. 1–15.

Mouha N. The design space of lightweight cryptography. IACR Cryptol ePrint Arch. 2015; 2015:303.

Daemen J, Rijmen V. The Design of Rijndael: AES - The Advanced Encryption Standard. Information Security and Cryptography. Berlin: Springer; 2002.

Grosso V, Leurent G, Standaert F-X, Varici K. Ls-designs: Bitslice encryption for efficient masked software implementations. In: FSE. LNCS, 8540 vol.Berlin: Springer: 2014. p. 18–37.

Dinu D, Perrin L, Udovenko A, Velichkov V, Großschädl J, Biryukov A. Design strategies for ARX with provable bounds: Sparx and LAX. In: ASIACRYPT (1). LNCS, 10031 vol.Berlin: Springer: 2016. p. 484–513.

Albrecht MR, Driessen B, Kavun EB, Leander G, Paar C, Yalçin T. Block ciphers - focus on the linear layer (feat. PRIDE). In: CRYPTO (1). LNCS, 8616 vol.Berlin: Springer: 2014. p. 57–76.

Beierle C, Jean J, Kölbl S, Leander G, Moradi A, Peyrin T, Sasaki Y, Sasdrich P, Sim SM. The SKINNY family of block ciphers and its low-latency variant MANTIS. In: CRYPTO (2). LNCS, 9815 vol.Berlin: Springer: 2016. p. 123–53.

Bogdanov A, Knudsen LR, Leander G, Paar C, Poschmann A, Robshaw MJB, Seurin Y, Vikkelsoe C. PRESENT: an ultra-lightweight block cipher. In: CHES. LNCS, 4727 vol.Berlin: Springer: 2007. p. 450–66.

Reis TBS, Aranha DF, López J. PRESENT runs fast - efficient and secure implementation in software. In: CHES, volume 10529 of Lecture Notes in Computer Science. Berlin: Springer: 2017. p. 644–64.

Aumasson J-P, Bernstein DJ. Siphash: A fast short-input PRF. In: INDOCRYPT. LNCS, 7668 vol.Berlin: Springer: 2012. p. 489–508.

Kölbl S, Lauridsen MM, Mendel F, Rechberger C. Haraka v2 - efficient short-input hashing for post-quantum applications. IACR Trans Symmetric Cryptol. 2016; 2016(2):1–29.

Aumasson J-P, Neves S, Wilcox-O’Hearn Z, Winnerlein C. BLAKE2: simpler, smaller, fast as MD5. In: ACNS. LNCS, 7954 vol.Berlin: Springer: 2013. p. 119–35.

Stevens M, Karpman P, Peyrin T. Freestart collision for full SHA-1. In: EUROCRYPT (1). LNCS, 9665 vol.Berlin: Springer: 2016. p. 459–83.

NIST Computer Security Division. SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions. FIPS Publication 202, National Institute of Standards and Technology, U.S. Department of Commerce, May 2014.

McGrew DA, Viega J. The security and performance of the galois/counter mode (GCM) of operation. In: INDOCRYPT. LNCS, 3348 vol.Berlin: Springer: 2004. p. 343–55.

Koblitz N. A family of jacobians suitable for discrete log cryptosystems. In: CRYPTO, volume 403 of LNCS. Berlin: Springer: 1988. p. 94–99.

Bernstein DJ. Curve25519: New diffie-hellman speed records. In: Public Key Cryptography. LNCS, 3958 vol.Berlin: Springer: 2006. p. 207–28.

Bernstein DJ, Duif N, Lange T, Schwabe P, Yang B-Y. High-speed high-security signatures. J Cryptographic Eng. 2012; 2(2):77–89.

Costello C, Longa P. Four \(\mathbb {Q}\) : Four-dimensional decompositions on a \(\mathbb {Q}\) -curve over the mersenne prime. In: ASIACRYPT (1). LNCS, 9452 vol.Berlin: Springer: 2015. p. 214–35.

Banik S, Bogdanov A, Regazzoni F. Exploring energy efficiency of lightweight block ciphers. In: SAC. LNCS, 9566 vol.Berlin: Springer: 2015. p. 178–94.

Dinu D, Corre YL, Khovratovich D, Perrin L, Großschädl J, Biryukov A. Triathlon of lightweight block ciphers for the internet of things. NIST Workshop on Lightweight Cryptography. 2015.

Kocher PC. Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems. In: CRYPTO. LNCS, 1109 vol.Berlin: Springer: 1996. p. 104–13.

Rodrigues B, Pereira FMQ, Aranha DF. Sparse representation of implicit flows with applications to side-channel detection In: Zaks A, Hermenegildo MV, editors. Proceedings of the 25th International Conference on Compiler Construction, CC 2016, Barcelona, Spain, March 12-18, 2016. New York: ACM: 2016. p. 110–20.

Almeida JB, Barbosa M, Barthe G, Dupressoir F, Emmi M. Verifying constant-time implementations. In: USENIX Security Symposium. Berkeley: USENIX Association: 2016. p. 53–70.

Kocher PC, Jaffe J, Jun B. Differential power analysis. In: CRYPTO. LNCS, 1666 vol. Springer: 1999. p. 388–97.

Biham E, Shamir A. Differential fault analysis of secret key cryptosystems. In: CRYPTO. LNCS, 1294 vol.Berlin: Springer: 1997. p. 513–25.

Kim Y, Daly R, Kim J, Fallin C, Lee J-H, Lee D, Wilkerson C, Lai K, Mutlu O. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. In: ISCA. Washington, DC: IEEE Computer Society: 2014. p. 361–72.

Ishai Y, Sahai A, Wagner D. Private circuits: Securing hardware against probing attacks. In: CRYPTO. LNCS, 2729 vol. Springer: 2003. p. 463–81.

Balasch J, Gierlichs B, Grosso V, Reparaz O, Standaert F-X. On the cost of lazy engineering for masked software implementations. In: CARDIS. LNCS, 8968 vol.Berlin: Springer: 2014. p. 64–81.

Nogueira M, dos Santos AL, Pujolle G. A survey of survivability in mobile ad hoc networks. IEEE Commun Surv Tutor. 2009; 11(1):66–77.

Mansfield-Devine S. The growth and evolution of ddos. Netw Secur. 2015; 2015(10):13–20.

Thielman S, Johnston C. Major Cyber Attack Disrupts Internet Service Across Europe and US. https://www.theguardian.com/technology/2016/oct/21/ddos-attack-dyn-internet-denial-service . Accessed 3 July 2018.

DDoS attacks: For the hell of it or targeted – how do you see them off? http://www.theregister.co.uk/2016/09/22/ddos_attack_defence/ . Accessed 14 Feb 2017.

Santos AA, Nogueira M, Moura JMF. A stochastic adaptive model to explore mobile botnet dynamics. IEEE Commun Lett. 2017; 21(4):753–6.

Macedo R, de Castro R, Santos A, Ghamri-Doudane Y, Nogueira M. Self-organized SDN controller cluster conformations against DDoS attacks effects. In: 2016 IEEE Global Communications Conference, GLOBECOM, 2016, Washington, DC, USA, December 4–8, 2016. Piscataway: IEEE: 2016. p. 1–6.

Soto J, Nogueira M. A framework for resilient and secure spectrum sensing on cognitive radio networks. Comput Netw. 2015; 79:313–22.

Lipa N, Mannes E, Santos A, Nogueira M. Firefly-inspired and robust time synchronization for cognitive radio ad hoc networks. Comput Commun. 2015; 66:36–44.

Zhang C, Song Y, Fang Y. Modeling secure connectivity of self-organized wireless ad hoc networks. In: IEEE INFOCOM. Piscataway: IEEE: 2008. p. 251–5.

Salem NB, Hubaux J-P. Securing wireless mesh networks. IEEE Wirel Commun. 2006; 13(2):50–5.

Yang H, Luo H, Ye F, Lu S, Zhang L. Security in mobile ad hoc networks: challenges and solutions. IEEE Wirel Commun. 2004; 11(1):38–47.

Nogueira M. SAMNAR: A survivable architecture for wireless self-organizing networks. PhD thesis, Université Pierre et Marie Curie - LIP6. 2009.

ITU. NGN identity management framework: International Telecommunication Union (ITU); 2009. Recommendation Y.2720.

Lopez J, Oppliger R, Pernul G. Authentication and authorization infrastructures (aais): a comparative survey. Comput Secur. 2004; 23(7):578–90.

Arias-Cabarcos P, Almenárez F, Trapero R, Díaz-Sánchez D, Marín A. Blended identity: Pervasive idm for continuous authentication. IEEE Secur Priv. 2015; 13(3):32–39.

Bhargav-Spantzel A, Camenisch J, Gross T, Sommer D. User centricity: a taxonomy and open issues. J Comput Secur. 2007; 15(5):493–527.

Garcia-Morchon O, Kumar S, Sethi M, Internet Engineering Task Force. State-of-the-art and challenges for the internet of things security. Internet Engineering Task Force; 2017. https://datatracker.ietf.org/doc/html/draft-irtf-t2trg-iot-seccons-04 .

Torres J, Nogueira M, Pujolle G. A survey on identity management for the future network. IEEE Commun Surv Tutor. 2013; 15(2):787–802.

Hanumanthappa P, Singh S. Privacy preserving and ownership authentication in ubiquitous computing devices using secure three way authentication. In: Proceedings. International Conference on Innovations in Information Technology (IIT): 2012. p. 107–12.

Fremantle P, Aziz B, Kopecký J, Scott P. Federated identity and access management for the internet of things. In: 2014 International Workshop on Secure Internet of Things: 2014. p. 10–17.

Domenech MC, Boukerche A, Wangham MS. An authentication and authorization infrastructure for the web of things. In: Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks, Q2SWinet ’16. New York: ACM: 2016. p. 39–46.

Birrell E, Schneider FB. Federated identity management systems: A privacy-based characterization. IEEE Secur Priv. 2013; 11(5):36–48.

Nguyen T-D, Al-Saffar A, Huh E-N. A dynamic id-based authentication scheme. In: Proceedings. Sixth International Conference on Networked Computing and Advanced Information Management (NCM), 2010.2010. p. 248–53.

Gusmeroli S, Piccione S, Rotondi D. A capability-based security approach to manage access control in the internet of things. Math Comput Model. 2013; 58:1189–205.

Akram H, Hoffmann M. Supports for identity management in ambient environments-the hydra approach. In: Proceedings. 3rd International Conference on Systems and Networks Communications, 2008. ICSNC’08.2008. p. 371–7.

Liu J, Xiao Y, Chen CLP. Authentication and access control in the internet of things. In: Proceedings. 32nd International Conference on Distributed Computing Systems Workshops (ICDCSW) 2012.2012. p. 588–92.

Ndibanje B, Lee H-J, Lee S-G. Security analysis and improvements of authentication and access control in the internet of things. Sensors. 2014; 14(8):14786–805.


Acknowledgments

We would like to thank Artur Souza for fruitful discussions that contributed to this work.

This work was partially supported by the CNPq, NSF, RNP, FAPEMIG, FAPERJ, and CAPES.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Author information

Authors and affiliations

UFMG, Av. Antônio Carlos, 6627, Prédio do ICEx, Anexo U, sala 6330, Pampulha, Belo Horizonte, MG, Brasil

Leonardo B. Oliveira

Federal University of Minas Gerais, Belo Horizonte, Brasil

Fernando Magno Quintão Pereira

Intel Labs, Hillsboro, USA

Rafael Misoczki

University of Campinas, Campinas, Brasil

Diego F. Aranha

National Laboratory for Scientific Computing, Petrópolis, Brasil

Fábio Borges

Federal University of Paraná, Curitiba, Brasil

Michele Nogueira

Universidade do Vale do Itajaí, Florianópolis, Brasil

Michelle Wangham

University of Maryland, College Park, USA

Min Wu

Microsoft Research, Redmond, WA, USA

Jie Liu


Contributions

All authors wrote and reviewed the manuscript. Mainly, LBO focused on the introduction and the overall conception of the paper; FM on Software Protection; RM on Long-Term Security; DFA on Cryptographic Engineering; MN on Resilience; MW (Wangham) on Identity Management; FB on Privacy; MW (Wu) on Forensics; and JL on the conclusion and the overall conception of the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Leonardo B. Oliveira.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Authors' information

Leonardo B. Oliveira is an associate professor of the CS Department at UFMG, a visiting associate professor of the CS Department at Stanford, and a research productivity fellow of the Brazilian Research Council (CNPq). Leonardo has been awarded the Microsoft Research Ph.D. Fellowship Award, the IEEE Young Professional Award, and the Intel Strategic Research Alliance Award. He has published papers on the security of IoT/Cyber-Physical Systems in venues such as IPSN and SenSys, and he is the (co)inventor of an authentication scheme for IoT (USPTO Patent Application No. 62287832). Leonardo served as General Chair and TPC Chair of the Brazilian Symposium on Security (SBSeg) in 2014 and 2016, respectively, and as a member of the Advisory Board of the Special Interest Group on Information and Computer System Security (CESeg) of the Brazilian Computer Society. He is a member of the Technical Committee of Identity Management (CT-GId) of the Brazilian National Research and Education Network (RNP).

Fernando M. Q. Pereira is an associate professor at UFMG’s Computer Science Department. He received his Ph.D. from the University of California, Los Angeles, in 2008, and has since done research in the field of compilers. He seeks to develop techniques that let programmers produce safe yet efficient code. Fernando’s portfolio of analyses and optimizations is available at http://cuda.dcc.ufmg.br/ . Some of these techniques have found their way into important open source projects, such as LLVM, PHC and Firefox.

Rafael Misoczki is a Research Scientist at Intel Labs, USA. His work is focused on post-quantum cryptography and conventional cryptography. He contributes to international standardization efforts on cryptography (expert member of the USA delegation for ISO/IEC JTC1 SC27 WG2, expert member of INCITS CS1, and submitter to the NIST standardization competition on post-quantum cryptography). He holds a PhD degree from Sorbonne Universités (University of Paris - Pierre et Marie Curie), France (2013). He also holds an MSc. degree in Electrical Engineering (2010) and a BSc. degree in Computer Science (2008), both from the Universidade de São Paulo, Brazil.

Diego F. Aranha is an Assistant Professor in the Institute of Computing at the University of Campinas (Unicamp). He holds a PhD degree in Computer Science from the University of Campinas and worked as a visiting PhD student for one year at the University of Waterloo. His professional experience is in cryptography and computer security, with a special interest in the efficient implementation of cryptographic algorithms and the security analysis of real-world systems. He coordinated the first team of independent researchers capable of detecting and exploiting vulnerabilities in the software of the Brazilian voting machine during controlled tests organized by the electoral authority. He received the Google Latin America Research Award for research on privacy twice, and the MIT TechReview’s Innovators Under 35 Brazil Award for his work on electronic voting.

Fábio Borges is a professor in the doctoral program at the Brazilian National Laboratory for Scientific Computing (LNCC, from its Portuguese name). He holds a doctoral degree (Dr.-Ing.) from the Department of Computer Science at TU Darmstadt, a master’s degree in Computational Modeling from LNCC, and a bachelor’s degree in Mathematics from Londrina State University (UEL). Currently, he is conducting research at LNCC in the fields of algorithms, security, privacy, and smart grids. Further information can be found at http://www.lncc.br/~borges/ .

Michele Nogueira is an Associate Professor of the Computer Science Department at the Federal University of Paraná. She received her doctorate in Computer Science from UPMC — Sorbonne Universités, Laboratoire d’Informatique de Paris VI (LIP6), in 2009. Her research interests include wireless networks, security, and dependability. For many years she has been working on providing resilience to self-organized, cognitive, and wireless networks through adaptive and opportunistic approaches. Dr. Nogueira was one of the pioneers in addressing survivability issues in self-organized wireless networks; her works “A Survey of Survivability in Mobile Ad Hoc Networks” and “An Architecture for Survivable Mesh Networking” are among her most prominent scientific contributions. She is an Associate Technical Editor for the IEEE Communications Magazine and the Journal of Network and Systems Management. She serves as Vice-chair of the IEEE ComSoc Internet Technical Committee. She is an ACM and IEEE Senior Member.

Michelle S. Wangham is a Professor at the University of Vale do Itajaí (Brazil). She received her M.Sc. and Ph.D. in Electrical Engineering from the Federal University of Santa Catarina (UFSC) in 2004. Recently, she was a Visiting Researcher at the University of Ottawa. Her research interests are vehicular networks, security in embedded and distributed systems, identity management, and network security. She is a consultant for the Brazilian National Research and Education Network (RNP), acting as coordinator of the Identity Management Technical Committee (CT-GID) and as a member of the Network Monitoring Technical Committee. Since 2013, she has been coordinating the GIdLab project, a testbed for R&D in identity management.

Min Wu received the B.E. degree (Highest Honors) in electrical engineering - automation and the B.A. degree (Highest Honors) in economics from Tsinghua University, Beijing, China, in 1996, and the Ph.D. degree in electrical engineering from Princeton University in 2001. Since 2001, she has been with the University of Maryland, College Park, where she is currently a Professor and a University Distinguished Scholar-Teacher. She leads the Media and Security Team, University of Maryland, where she is involved in information security and forensics and multimedia signal processing. She has coauthored two books and holds nine U.S. patents on multimedia security and communications. Dr. Wu coauthored several papers that won awards from the IEEE, ACM, and EURASIP, respectively. She also received an NSF CAREER award in 2002, a TR100 Young Innovator Award from the MIT Technology Review Magazine in 2004, an ONR Young Investigator Award in 2005, a ComputerWorld “40 Under 40” IT Innovator Award in 2007, an IEEE Mac Van Valkenburg Early Career Teaching Award in 2009, a University of Maryland Invention of the Year Award in 2012 and in 2015, and an IEEE Distinguished Lecturer recognition in 2015–2016. She has served as the Vice President-Finance of the IEEE Signal Processing Society (2010–2012) and the Chair of the IEEE Technical Committee on Information Forensics and Security (2012–2013). She is currently the Editor-in-Chief of the IEEE Signal Processing Magazine. She was elected IEEE Fellow for contributions to multimedia security and forensics.

Dr. Jie Liu is a Principal Researcher at Microsoft AI and Research, Redmond, WA. His research interests are rooted in sensing and interacting with the physical world through computing. Examples include time, location, and energy awareness, and the Internet/Intelligence of Things. He has published broadly in areas such as sensor networking, embedded devices, mobile and ubiquitous computing, and data center management. He has received 6 best paper awards at top academic conferences in these fields. In addition, he holds more than 100 patents. He is the Steering Committee chair of Cyber-Physical Systems (CPS) Week and ACM/IEEE IPSN, and a Steering Committee member of ACM SenSys. He is an Associate Editor of ACM Transactions on Sensor Networks, was an Associate Editor of IEEE Transactions on Mobile Computing, and has chaired a number of top-tier conferences. Among other recognitions, he received the Leon Chua Award from UC Berkeley in 2001, the Technology Advance Award from (Xerox) PARC in 2003, and a Gold Star Award from Microsoft in 2008. He received his Ph.D. degree from the Department of Electrical Engineering and Computer Sciences at UC Berkeley in 2001, and his master’s and bachelor’s degrees from the Department of Automation at Tsinghua University, Beijing, China. From 2001 to 2004, he was a research scientist at the Palo Alto Research Center (formerly Xerox PARC). He is an ACM Distinguished Scientist and an IEEE Senior Member.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Oliveira, L., Pereira, F., Misoczki, R. et al. The computer for the 21st century: present security & privacy challenges. J Internet Serv Appl 9, 24 (2018). https://doi.org/10.1186/s13174-018-0095-2

Download citation

Received: 13 April 2018

Accepted: 27 August 2018

Published: 04 December 2018

DOI: https://doi.org/10.1186/s13174-018-0095-2



Essay on Life In 21st Century

Students are often asked to write an essay on life in the 21st century in their schools and colleges. If you are looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Life In 21st Century

Introduction

Life in the 21st century is marked by rapid changes and advancements. It’s a time of technology, innovation, and global connections. We have seen improvements in many areas, from communication to healthcare.

Globalization

Our world has become a global village. We can communicate with people from different cultures and backgrounds. This has increased our understanding and appreciation of diversity.

Advancements in healthcare have improved our lives. New treatments and medicines have been developed. Diseases that were once deadly can now be cured or managed.

Life in the 21st century is exciting but also challenging. We must use the advancements wisely and work together to overcome the challenges.

250 Words Essay on Life In 21st Century

The 21st Century Lifestyle

The 21st century is a time of rapid change and progress. We live in a world where technology is at our fingertips, making our lives easier and more comfortable.

Technology and Communication

One of the most significant changes in the 21st century is the advancement in technology. Today, we can communicate with anyone, anywhere in the world, in just a few seconds. Smartphones, the internet, and social media platforms have transformed the way we interact.

Education and Learning

Education has also seen a massive transformation. Online learning is now a reality, allowing students to learn at their own pace from anywhere. This has made education more accessible to everyone, regardless of their location or circumstances.

Health and Medicine

The field of health and medicine has also evolved. New treatments and medicines have been developed, increasing the average lifespan. People are now more aware of their health and are taking steps to lead healthier lives.

Challenges of the 21st Century

In conclusion, life in the 21st century is a mix of advancements and challenges. We have the tools to make our lives better, but we also have the responsibility to use them wisely for the benefit of all.

500 Words Essay on Life In 21st Century

Life in the 21st century is full of excitement and challenges. It is a time of rapid change and amazing progress. We live in an era where technology has become a key part of our lives.

One of the most important parts of life in the 21st century is technology. It has made our lives easier in many ways. We can now talk to friends and family who live far away by using our phones or computers. We can also use the internet to learn new things and find information quickly. But, it’s important to remember that too much screen time can be bad for our health. We need to balance our use of technology with other activities.

Health and Lifestyle

Life in the 21st century is also different because of changes in our health and lifestyle. We now know more about how to take care of our bodies. We understand the importance of eating healthy food and exercising regularly. But, the busy pace of life can make it hard to find time for these things.

Environment

Life in the 21st century is full of both challenges and opportunities. We have amazing technology and access to information. But, we also face problems like taking care of our health and the environment. It’s an exciting time to be alive, and we all have a part to play in shaping the future.

Remember, the 21st century is our time. Let’s make the most of it!

That’s it! I hope the essay helped you.


Happy studying!




Putin’s Visit Symbolizes North Korea’s Changing Foreign Policy


The Russian president’s trip to Pyongyang is a sign of substantial shifts in North Korean ideology and diplomacy. Meanwhile, the U.S. playbook on North Korea has not changed in decades.


North Korean leader Kim Jong Un (left) shakes hands with Russian President Vladimir Putin during a meeting at the Vostochny Cosmodrome, Sep. 13, 2023.

Two international crises have profoundly shaped the leadership decisions of North Korean leader Kim Jong Un: COVID-19 and Russia’s illegal invasion of Ukraine. Both of these crises have led to substantial shifts in North Korean ideology and altered longstanding pillars of Pyongyang’s diplomatic decision-making.

Russian President Vladimir Putin’s current visit to Pyongyang is a manifestation of these changes and is likely to lead to closer ties between Russia and North Korea.

In early January, Kim Jong Un declared South Korea a hostile foreign enemy and essentially renounced the regime’s longstanding policy of seeking “peaceful reunification” with the South. Following this declaration, North Korean organizations dedicated to inter-Korean relations and symbols representing inter-Korean kinship have been systematically dismantled. Most prominently, the Arch of Reunification in Pyongyang was destroyed after Kim Jong Un called it an “eyesore” in state media.

These sweeping changes to Pyongyang’s inter-Korean policy are largely pragmatic. Gone are the days of hoping for a people’s uprising driven by South Koreans yearning to live under the Kim family regime. Kim Jong Un recognizes that South Korea, with its strong economy and robust cultural exports, is not ripe terrain for a socialist revolution.

Nonetheless, in a hereditary dictatorship that emphasizes ideological loyalty above all else, these drastic shifts are revelatory of a changed North Korean grand strategy. They signify a return to foundational principles of maintaining equidistance between Moscow and Beijing, with an eye toward establishing North Korea as a formidable global player, and a move away from the hand-to-mouth diplomacy of the last 30 years.

Kim Jong Un’s new direction on inter-Korean policy, his exploitation of opportunistic developments in global affairs, and his continued devotion of his country’s meager resources to strategic weapons development suggest that the United States and the West need to give more credit to the North Korean leader, who is often characterized as a meme or a laughingstock. Kim has shown that he is capable of growth and is able to shift his strategic thinking. Unfortunately, U.S. policy on North Korea has not displayed similar flexibility or growth.

Lessons From the International Situation

The pandemic revealed to Kim Jong Un that Pyongyang should not be overly reliant on China and that self-imposed isolation can benefit ideological cohesion within North Korea through the reduction of foreign influence. After the outbreak of COVID-19 in Wuhan, China, the North Korean regime imposed strict border closures. During the global pandemic, Pyongyang reestablished an anachronistic commitment to autarky and anti-globalization. 

These border closures did not collapse the North Korean economy but rather reinforced principles and values from the days of North Korea’s founding leader Kim Il Sung about the importance of being self-sufficient and self-reliant. In April 2020, North Korea’s Workers’ Party leadership “ reaffirmed that it is the firm political line of our Party to build a powerful socialist nation under the uplifted banner of self-reliance.”

On one hand, COVID-19 revealed the poor handling of an international crisis by China, North Korea’s main ally. It also posed a major biological threat to the North Korean political elite – including the “Supreme Leader,” who is obese and a heavy smoker, both of which increase the risk of death from COVID-19.

On the other hand, the Kim regime’s self-imposed isolation from the pandemic disclosed the limits of international sanctions on its national economy and the self-serving benefits of cutting its population off from external “ideological pollution.” The pandemic proved to be a good opportunity for North Korean security services to clamp down on foreign influence within the country, namely South Korean dramas and movies. The complete cutting of ties with its southern brethren is a way to protect the ideological legitimacy of the regime and eliminate the potential of an alternative governance structure within North Korea.

While COVID-19 was a double-edged sword, Russia’s illegal invasion of Ukraine has been a blessing to the North Korean leadership. It revealed that the international community is weak when it comes to confronting nuclear threats and, more importantly, it was a pathway to reducing Pyongyang’s politico-economic dependence on Beijing. The Russian military is desperate for artillery shells and short-range ballistic missiles. North Korea’s Soviet-era artillery stockpiles are a much needed boost to Russia’s war machine.

This rejuvenated North Korea-Russia military partnership provides an outlet for two heavily sanctioned governments. Russia obtains much needed arms and ammunition from North Korea amid a global blacklisting of the Russian defense industry. Russian payment for North Korean armaments likely comes in the form of much needed hard currency and technical cooperation on dual use technology. 

An added bonus is that North Korean military generals can see how their weapons function on a 21st century battlefield, which could lead to changes to the regime’s strategic calculus should a military conflict suddenly erupt on the Korean Peninsula.

American Staleness and the Anti-U.S. Axis

The American playbook on North Korea has not changed in decades. While the United States has offered full-throated support for Ukraine, militarily and politically, Washington still tries its best to put North Korea on the backburner. A North Korean provocation – even when it results in loss of lives – is followed by a U.S. condemnation, vague comments about “all options being on the table,” and a U.S. Navy ship visit or an Air Force flyover.

All options are not on the table; we know it, and the North Koreans know it. These actions do not deter the North. Instead of being further isolated, Pyongyang has now found kindred spirits in a revanchist Russia and a revisionist great power in China.

North Korea’s relationships with Russia and China have had their ups and downs, from the founding of the country as a Soviet-client state to Pyongyang trying to squeeze Moscow and Beijing simultaneously for aid during the Sino-Soviet rivalry of the 1960s and the 1970s. In the 1990s and the 2000s, as Russia and China emphasized economic growth and attracting international investment, North Korea focused on developing nuclear weapons and the means to deliver them. During the Six Party Talks and in discussions with U.S. officials, Russian and Chinese officials could barely hide their contempt for the Kim family regime, even as they tried to defend them in front of the United States and its allies.

There is no deep shared love between Kim, Putin, and China’s Xi Jinping; instead, they are united in opposition to the United States and the Western liberal order that has emerged victorious from the ashes of the Cold War. For now, deep animosity toward the United States and a common goal to frustrate U.S. designs has united the three leaders and elevated Kim Jong Un’s status as a member of a global anti-U.S. triumvirate on equal footing with Russia and China. North Korea has not had this level of relevance on the international stage since the 1960s and the 1970s.

The United States is not alone in this competition, but the window of opportunity to exert strategic influence is limited. Washington has a rare opportunity to work with two like-minded allies in South Korea and Japan, led by President Yoon Suk-yeol and Prime Minister Kishida Fumio, respectively. This will not last. The South Korean presidential election in 2027 could easily bring the progressive party back into power. Progressive leaders typically suspect U.S. motives, focus on improving relations with the North, and want to remain neutral in the China-U.S. competition. The presidential administration in Seoul could quickly switch from a staunch U.S. ally to North Korea’s defender and advocate in three years’ time.

Breaking the Chain

The United States historically has focused on “calming the situation” and “de-escalation” in Korea. Instead, the policy debate and focus on Asia needs to shift toward going on the strategic offensive and gaining escalation dominance. 

Unfortunately, there is no shortage of conflict and crisis in the world. Both headlines and Washington’s attention are currently focused on Ukraine and Israel. In a polycrisis world, U.S. policymakers face the unenviable job of prioritizing the threats to the United States. That said, prioritization can no longer be used as an excuse for Washington and its allies to keep reaching into the well-worn bag of carrots and sticks with regard to North Korea.

Even if North Korea cannot be an immediate priority, Asia-focused policymakers from the National Security Council to the Departments of State and Defense, as well as in the military’s Indo-Pacific Command, can still lay out a clear strategic vision for how the U.S. and its allies can counter Kim Jong Un and his rekindled friendship with Putin and communicate this vision to stakeholders. That vision will require using all the tools in the U.S. toolkit.

Given North Korea’s stranglehold on its information ecosystem, disseminating outside information to the North Korean public through whatever means available should be at the forefront of this toolkit. The U.S. and its allies should go on an ideological offensive against the North Korean regime.

Former senior U.S. official Michael Vickers in his recent memoir noted that President Ronald Reagan’s national security policy directive to use “all available means” to defeat the Soviets in Afghanistan was a watershed moment in U.S. policy, helping the Mujahideen to tip the scale on the battlefield. The time has come for the current and future U.S. administrations to step up to this “all available means” approach and break the weakest link in the anti-U.S. triumvirate chain, North Korea.


Strongest U.S. Challenge to Big Tech’s Power Nears Climax in Google Trial

The first tech monopoly trial of the modern internet era is concluding. The judge’s ruling is likely to set a precedent for other attempts to rein in the tech giants that hold sway over information, social interaction and commerce.

A beige building with the words “E. Barrett Prettyman Court House” etched over three doors.

By David McCabe

David McCabe has been covering the Google antitrust trial from Washington, D.C.

The biggest U.S. challenge so far to the vast power of today’s tech giants is nearing its conclusion.

Starting Thursday, lawyers for the Justice Department, state attorneys general and Google delivered their final arguments in a yearslong case — U.S. et al. v. Google — over whether the tech giant broke federal antitrust laws to maintain its online search dominance. Arguments are scheduled to conclude Friday.

The government claims that Google competed unfairly when it paid Apple and other companies billions of dollars to automatically handle searches on smartphones and web browsers. Google insists that consumers use its search engine because it is the best product.

In the coming weeks or months, the judge who has overseen the trial in U.S. District Court for the District of Columbia, Amit P. Mehta, will deliver a ruling that could change the way Google does business or even break up the company — or absolve the tech giant completely. Many antitrust experts expect he will land somewhere in the middle, ruling only some of Google’s tactics out of bounds.

The trial is the biggest challenge to date to the vast power of today’s tech giants, which have defined an era when billions of people around the world depend on their products for information, social interaction and commerce. American regulators have also sued Apple, Amazon and Meta in recent years for monopolistic behavior, and Google’s case is likely to set a legal precedent for the group.

“This will be the most important decision and the most important antitrust trial of the 21st century,” said Rebecca Haw Allensworth, a professor at Vanderbilt Law School who studies antitrust. “It’s the first of the major monopolization cases against the major tech platforms to go to trial, and so that makes it a bellwether.”

