The industrial internet of things (IoT) holds a lot of promise, but it’s been held back, in part, by the limited energy supply of sensors and the short lifetimes of their batteries. That’s finally changing. Everactive, a startup, has developed industrial sensors that can run nonstop, with minimal maintenance, for over 20 years.
Engineers at Everactive solved the problem of limited-lifetime batteries in sensors by removing the batteries entirely. Instead, Everactive’s integrated circuits draw power from vibrations and indoor light and transmit their captured data to cloud servers. The resulting sensors also cost significantly less than traditional battery-powered ones.
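The core constraint on a batteryless node is that every measurement and every radio burst must fit inside whatever energy can be harvested in the meantime. The sketch below illustrates that duty-cycling logic; the energy figures, the read_vibration_sensor stub, and the batching behavior are illustrative assumptions, not Everactive’s actual firmware.

```python
import random

# Hypothetical energy budget in microjoules; not Everactive's real figures.
HARVEST_PER_SECOND_UJ = 50      # trickle from vibration + indoor-light harvesting
COST_PER_READING_UJ = 10        # one sensor measurement
COST_PER_TRANSMIT_UJ = 400      # one radio burst toward the cloud gateway

def read_vibration_sensor():
    """Stand-in for a real driver call; returns a fake RMS vibration value."""
    return round(random.uniform(0.0, 2.0), 3)

def run_batteryless_node(seconds=30):
    stored_uj, batch = 0.0, []
    for _ in range(seconds):                    # each loop pass = one second of operation
        stored_uj += HARVEST_PER_SECOND_UJ      # harvest continuously
        if stored_uj >= COST_PER_READING_UJ:    # sample only when affordable
            stored_uj -= COST_PER_READING_UJ
            batch.append(read_vibration_sensor())
        if batch and stored_uj >= COST_PER_TRANSMIT_UJ:
            stored_uj -= COST_PER_TRANSMIT_UJ   # send a batch, not every sample
            print(f"uploading {len(batch)} readings: {batch}")
            batch.clear()

run_batteryless_node()
```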
Factories are the first to benefit, but more applications are coming as the next generation of sensors gets built. Everactive’s engineers believe they’re only a few years away from producing translucent, flexible sensors no bigger than a postage stamp: all customers will need to do is affix a sensor to begin generating data. Industry leaders and computer engineers will happily have their hands full trying to make use of this boon of connected information.
Built by OpenAI, GPT-3 could be described as the most powerful form of autocomplete ever invented. The third generation of the Generative Pre-trained Transformer (GPT), it builds its abilities on 175 billion parameters. Users have leveraged GPT-3 to create question-based search engines, generate programming code, compose music, and respond to medical queries. GPT-3 can even produce convincing imitations of fiction, blog posts, and poetry.
It’s not perfect. For every astounding success, GPT-3 produces several obvious mistakes. Part of this is due to the vast swathe of perfect and imperfect data it’s trained on: the result can look like an infinite number of monkeys tapping on an infinite number of typewriters.
But, if applied correctly, GPT-3 could automate a huge number of processes, revolutionize computer programming, and lead to important breakthroughs in AI research. What GPT-3 needs now, according to a source at Google, is hard engineering layered on top of it. Going forward, computer engineers will be looking at more ways to transfer the potential of AI-enabled software like GPT-3 into tangible benefits.
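That “hard engineering” often begins with something mundane: wrapping the model behind narrow application logic. The sketch below assumes OpenAI’s public HTTP completions endpoint, a placeholder model name, and an API key read from an environment variable; it shows one way a program might constrain GPT-3 to a single, well-defined task rather than open-ended generation.

```python
import os
import requests

# Assumes OpenAI's documented completions endpoint; model name and key are placeholders.
API_URL = "https://api.openai.com/v1/completions"
API_KEY = os.environ.get("OPENAI_API_KEY", "YOUR_KEY_HERE")

def summarize_ticket(ticket_text: str) -> str:
    """Wrap the general-purpose model in one narrow, well-defined task."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",   # placeholder model name
            "prompt": f"Summarize this support ticket in one sentence:\n{ticket_text}",
            "max_tokens": 60,
            "temperature": 0.2,            # low temperature: fewer "monkeys at typewriters" outputs
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(summarize_ticket("Customer reports the dashboard loads blank after login."))
```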
The Covid-19 pandemic caused a massive shift to remote work for many white-collar professions. But it also exposed a lot of the weaknesses in the existing tech that powers remote work. So instead of relying on video conferencing and virtual private networks, computer engineers are reimagining what remote work could look like if it were to be prioritized and developed going into the future.
Virtual reality (VR) and augmented reality (AR) are prime candidates for evolving the concept of remote work. By giving users a virtual experience that’s as visual and immersive as a physical one, computer engineers hope to facilitate collaboration, education, and spontaneity. The vision is a transition to a hybrid work model in white-collar industries, and experts across the business world are buying in: Facebook already has working prototypes, and PwC projects the VR industry to grow exponentially over the coming years.
VR and AR tech are also being considered for use in musical collaborations through initiatives like Jamkazam and JackTrip. Computer engineers are working with software engineers to drop latency low enough to support virtual rehearsals and augmented performances. This would open up the potential for larger, more inclusive audiences while also building new business models based around virtual (rather than physical) events. With VR and AR, computer engineers will increasingly merge the physical and non-physical worlds.
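The target is concrete: for musicians to stay in time, one-way delay between players generally needs to stay under roughly 25–30 milliseconds (a commonly cited threshold, assumed here rather than taken from Jamkazam or JackTrip documentation), and signal propagation alone eats into that budget quickly. A back-of-the-envelope sketch, with illustrative distances:

```python
# Rough latency budget for a networked rehearsal; all figures are assumptions.
ENSEMBLE_THRESHOLD_MS = 28          # commonly cited upper bound for playing in time
FIBER_SPEED_KM_PER_MS = 200         # light in optical fiber, roughly 2/3 of c

def one_way_propagation_ms(distance_km: float) -> float:
    return distance_km / FIBER_SPEED_KM_PER_MS

for route, km in [("same city", 50), ("New York-Chicago", 1150), ("New York-Los Angeles", 3940)]:
    prop = one_way_propagation_ms(km)
    remaining = ENSEMBLE_THRESHOLD_MS - prop
    print(f"{route}: ~{prop:.1f} ms in fiber, {remaining:.1f} ms left "
          f"for audio buffering, routing, and jitter")
```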
In 2018, autonomous ridesharing services launched in four Phoenix suburbs. But three years on, there’s still a lot of work to do to bring self-driving cars into the mainstream. Giving vehicles high levels of autonomy means meeting strict requirements for safety and reliability. Computer engineers must design vehicles with high-powered sensors, intuitive human-computer interaction, and backups on top of backups.
The rewards are large, with advocates pointing out that 90 percent of serious car accidents can be blamed on human error. But self-driving cars, which promise to reduce congestion and improve efficiency, aren’t without their faults either: in 2018, a self-driving car killed a pedestrian. Since then, some of the hype around self-driving cars has understandably faded. But the solution, it seems, is smaller steps: Automated Lane Keeping Systems (ALKS), which are now being rolled out in some everyday cars, could prevent an estimated 47,000 serious accidents over the next decade.
As each individual component gets tested, iterated, and innovated upon, self-driving cars inch closer to the commercial world. Industry heavyweights like GM and Microsoft are banking on it happening sooner rather than later. All computer engineers need to do is keep their eyes on the road.
At an average distance of 140 million miles, the trip to Mars is orders of magnitude more complex than a mission to Earth’s moon. The engineers at SpaceX envision a journey of roughly six months, with the Starship launch vehicle refueling both in Earth orbit and on the surface of Mars. Capable of carrying over 100 metric tons and running on reusable methalox staged-combustion Raptor engines, Starship is the most powerful launch vehicle ever developed.
If the future of humanity is indeed multi-planetary, computer engineers will be a key part of the team that takes us there. They design forms of alternative propulsion, new instruments for detecting subsurface water, human-computer interfaces for human space travel, and semi-autonomous drones and probes for exploration. Each step involves complex theoretical problems with no room for error.
NASA’s Perseverance rover landed successfully on Mars on February 18, 2021, with a unique passenger: Ingenuity, the Mars Helicopter. NASA’s computer engineers in Southern California built Ingenuity to fly in the atmospheric conditions specific to the red planet. But it carries no scientific instruments on board: its objective is solely an engineering one, to demonstrate the possibility of rotorcraft flight in the Martian atmosphere.
Due to the delays inherent in interplanetary communication, Ingenuity has been programmed to pilot itself, and even to make its own decisions. Back on Earth, computer engineers will be watching and waiting.
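Those delays are easy to quantify from the figure cited earlier in this article: at an average Earth–Mars distance of roughly 140 million miles, even a radio signal traveling at the speed of light takes more than ten minutes each way, far too long for a pilot on Earth to react. A quick back-of-the-envelope check:

```python
# One-way signal delay at the average Earth-Mars distance cited above.
AVG_DISTANCE_MILES = 140_000_000
SPEED_OF_LIGHT_MPS = 186_282        # miles per second

one_way_seconds = AVG_DISTANCE_MILES / SPEED_OF_LIGHT_MPS
print(f"one-way delay: {one_way_seconds:.0f} s (~{one_way_seconds / 60:.1f} minutes)")
# -> roughly 750 s, about 12.5 minutes one way; a command-and-response round trip
#    exceeds 25 minutes, which is why Ingenuity has to fly itself.
```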
In the early 21st century, Major League Baseball’s low-budget Oakland A’s managed to build a roster capable of competing with their better-financed rivals. They did it with data, through an approach now known as sabermetrics: the team made its staffing decisions based on then-obscure data points. Instead of focusing on the traditional measures of the past (e.g., a player’s pitching speed or stolen base percentage), the team’s management dove deeper into newly recorded statistics and uncovered the ones they thought really mattered. The result wasn’t just a better team for the Oakland A’s; it was a revolution in the way baseball franchises do business.
Big Data gives computer engineers more information than ever to power their decisions. With practically infinite data points available, the trick is knowing what questions to ask. But this isn’t just for games and businesses anymore; it’s also for the social good. Innovators are using algorithmic sorting and sabermetrics to tackle inequality, improve hiring practices, and stem the flow of misinformation.
Rediet Abebe, a PhD candidate in computer science at Cornell University, pioneered a new method of algorithmic sorting that seeks to bridge gaps in the delivery of resources to disadvantaged communities. As an intern at Microsoft, she developed an AI project that sought to identify unmet health needs in Africa by scanning people’s search queries. Abebe designed her algorithms to identify which demographics were prone to seek out information about, for example, HIV stigma, HIV discrimination, and natural HIV cures. In doing so, she identified segments of the population that needed help but weren’t receiving it.
Abebe’s project expanded to all 54 African nations and harvested web-only data to identify those most likely to be in need of support. And now she’s bringing that project to the US, working with the National Institutes of Health’s Advisory Committee to tackle health disparities in America.
In another example, two computer science and engineering undergraduates at the University of Nebraska, Vy Doan and Eric Le, are unleashing the power of algorithms in the battle against misinformation. Recognizing that humans are prone to confirmation bias, Doan and Le have developed a machine-learning algorithm that can identify questionable news all on its own. By having a system pore over years’ worth of Twitter posts, Doan and Le were able to discern the data points that mattered in spotting misinformation: location, account age, and frequency of posts. Once a system of detection was in place, they set about programming a browser extension that could caution users about unreliable sources of information.
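A stripped-down version of that idea fits in a few lines: train a classifier on the three signals Doan and Le found informative (account age, posting frequency, and a coarse location flag) and score new accounts against it. The toy data, labels, and scikit-learn model below are illustrative assumptions, not their actual system.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training rows: [account_age_days, posts_per_day, location_flag]
# location_flag is a coarse stand-in (1 = region flagged during training).
# Every row and label below is fabricated purely for illustration.
X = [
    [2400, 3, 0], [1800, 5, 0], [950, 2, 0], [3100, 1, 0],   # established accounts
    [12, 140, 1], [30, 95, 1], [8, 210, 0], [45, 80, 1],     # young, high-volume accounts
]
y = [0, 0, 0, 0, 1, 1, 1, 1]   # 1 = resembles known sources of misinformation

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

def caution_score(account_age_days, posts_per_day, location_flag):
    """Probability that an account resembles the unreliable sources seen in training."""
    return model.predict_proba([[account_age_days, posts_per_day, location_flag]])[0][1]

print(f"20-day-old account, 120 posts/day: {caution_score(20, 120, 1):.2f}")
print(f"10-year-old account, 2 posts/day:  {caution_score(3650, 2, 0):.2f}")
```

A browser extension like Doan and Le’s would sit on top of a model of this shape, translating the score into a caution banner for the user.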
Big Data isn’t new anymore. As it reaches maturity, the question for computer engineers isn’t how much data they have, but which data to pay attention to, and to what end.
As technology continues to progress, computer engineers are exploring an increasing number of ways to physically connect humans to it. Brain-Computer Interfaces (BCIs) do that in a startlingly literal way. By linking the brain with today’s hardware, BCIs have the potential to super-charge human evolution to its next stage.
It’s estimated that over a quarter of Americans suffer from brain disorders. That may sound like the setup to a bad joke, but the punchline is grim: these disorders manifest as post-traumatic stress, restricted mobility, and memory conditions like Alzheimer’s. In 2013, President Obama announced the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, which aimed to study the way the brain works as a piece of technology and how technology could best interact with it.
So far, the results have been fruitful. Preclinical studies have shown how brain cells combine to process emotions. Non-invasive ultrasonic technology may allow the release of medication into specific areas of the brain. Adaptable electrical stimulation devices can be used to treat movement disorders. This is a revelation in the way we treat issues of the brain: not just with diffused chemicals, but with electric connectivity.
The public sector isn’t the only one gaining ground in this field: Elon Musk has his own BCI company, Neuralink, a 100-person startup that’s developing data transmission systems between people and computers. Founded in 2017, it’s only now publicizing some of its progress: recording a rat’s brain activity through thousands of electrodes implanted along its neurons and synapses. Musk has also alluded to a successful BCI placement in a primate, which allowed the animal to control a computer with its thoughts.
The next step, which will seek FDA approval as early as 2021, is clinical human trials. The aim of these trials will be to insert those electrodes into paralyzed patients and give them the ability to control electronic devices with their thoughts. Other companies like Kernel and CTRL-labs are following suit.
BCI advocates point to applications in the fight against Parkinson’s disease, epilepsy, and blindness. Regulators, and those overly familiar with dystopian science fiction, remain leery. Fortunately for all, the next beneficiaries of BCI technology will likely be only those battling brain disorders. The dreaded cyborg army will have to wait.
One way to avoid the regulatory hurdles of BCIs is to ditch the human subject and simply build an entirely new brain. Neuromorphic computing seeks to engineer machines that mimic the function of the human brain in both hardware and software, and it’s gaining traction going into 2021.
From humble beginnings in the 1980s, neuromorphic computing took a big step forward in 2017, when Intel unveiled the Loihi neuromorphic processor, a self-learning chip that mimics brain functions by adapting to feedback from the observed environment. The Loihi chip is extremely energy-efficient, using recorded data to draw inferences and get smarter over time. And it’s high-powered, too: neuromorphic hardware excels in what have previously been seen as human-dominant areas like kinesthetics (prosthetic limbs) and visual recognition (pattern sorting).
In 2019, Intel integrated 64 Loihi chips into a single, large-scale neuromorphic system called Pohoiki Beach. That turned the hardware equivalent of 130,000 neuron analogs into 8,000,000. To put that in more graspable terms: a single Loihi chip has half the neural capacity of a fruit fly, while the Pohoiki Beach system has the neural capacity of a zebrafish. The most impressive part of this isn’t the current state but rather where it’s going. The Loihi chips consume 100 times less power than graphics processing units (GPUs) and five times less power than dedicated IoT inference hardware, meaning that Intel can scale up to about 50 times its current capacity and still retain better performance than its peers.
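What makes those neuron counts meaningful is the computational style behind them: rather than clocking large matrix multiplications, a neuromorphic chip hosts many simple neuron circuits that integrate incoming pulses and fire only when a threshold is crossed. The leaky integrate-and-fire model below is the textbook abstraction of that event-driven behavior, with made-up parameters; it is not a description of Loihi’s proprietary circuitry.

```python
# A single leaky integrate-and-fire neuron: the textbook abstraction behind
# spiking, event-driven hardware. Parameters are illustrative, not Intel's.
LEAK = 0.9          # fraction of membrane potential retained each timestep
THRESHOLD = 1.0     # potential at which the neuron emits a spike
WEIGHT = 0.3        # synaptic weight applied to each incoming spike

def simulate(input_spikes):
    potential, output = 0.0, []
    for spike_in in input_spikes:
        potential = potential * LEAK + WEIGHT * spike_in   # integrate and leak
        if potential >= THRESHOLD:                         # fire...
            output.append(1)
            potential = 0.0                                # ...and reset
        else:
            output.append(0)
    return output

# Sparse input: the neuron (and the chip) only does work when events arrive.
spikes_in = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
print("in :", spikes_in)
print("out:", simulate(spikes_in))
```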
Next year, Intel promises the unveiling of an even larger neuromorphic system, nicknamed Pohoiki Springs. Competitors like Samsung and IBM are already taking notice and developing their own projects. It will be a long time before the pseudo-brains of neuromorphics match or exceed the size and capacity of the engineers creating them. But the interim should see an uptick in more efficient—and more humanesque—computational power.
Quantum computing exists, but it also doesn’t. It’s kind of in-between. And that apparent paradox is the driving force of one of computer engineering’s most exciting possibilities.
Where traditional computing consists of bits coded as zeroes and ones, quantum computing replaces those with qubits that exist in a state of superposition, meaning they can act as both zeroes and ones simultaneously. If quantum computing can be scaled up, it could quickly solve problems that traditional computing technology would take years or even centuries to process. Paradigm shifts in finance, medicine, and IT would follow. The applications are practically limitless, and, to the uninitiated eye, the results could look like magic.
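The simplest case of superposition can even be simulated with ordinary linear algebra: a qubit is a two-component complex vector, and a Hadamard gate puts the |0⟩ state into an equal superposition whose measurement outcomes are 0 or 1 with 50 percent probability each. The NumPy sketch below does exactly that on a classical machine, which is, of course, precisely the kind of simulation that stops scaling as more qubits are added.

```python
import numpy as np

# A qubit is a length-2 complex state vector; |0> = [1, 0].
ket_zero = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket_zero
probabilities = np.abs(state) ** 2        # Born rule: |amplitude|^2

print("amplitudes:   ", state)            # [0.707+0j, 0.707+0j]
print("P(measure 0) =", probabilities[0]) # 0.5
print("P(measure 1) =", probabilities[1]) # 0.5
```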
In March 2018, Google’s Quantum AI Lab unveiled its 72-qubit processor, named Bristlecone. This was a critical step towards quantum supremacy: the moment that a quantum computer begins to outperform traditional supercomputers. But this is the quantum world, where nothing is linear and it’s not just about processing power. Quantum computers are prone to errors, and achieving quantum supremacy requires not just raw power but low error rates to go with it. Quantum supremacy has been elusive since the idea was first introduced, with some doubting it was even theoretically possible. But in 2019, Google discovered it was closer than anybody thought.
With each new improvement to Google’s quantum chips, there’s been a growth in power unlike anything else in nature. While traditional computing power has grown at an exponential rate (in accordance with Moore’s Law), Google’s quantum computing power is growing at a doubly exponential rate. If such a trend continues, practical quantum computing could arrive in the next year. People have mapped out use cases for everything from better drugs, to better batteries, to new forms of AI, to entirely new materials.
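The difference between those two curves is easy to understate. If classical capability grows like 2^n while quantum capability grows like 2^(2^n), the pattern sometimes described as Neven’s law, the gap becomes staggering within a handful of generations, as the quick comparison below shows; n here is just an abstract step index, not a count of real chip releases.

```python
# Exponential vs. doubly exponential growth; n is an abstract step index.
for n in range(1, 6):
    print(f"step {n}:  2^n = {2 ** n:>3}    2^(2^n) = {2 ** (2 ** n)}")
```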
Whether Google’s quantum technology can continue its rapid ascent and scale effectively remains in doubt. But the industry isn’t taking any chances. Researchers are already exploring ways to redesign critical digital infrastructure, such as encryption, for a post-quantum world. Heavyweights like IBM and Intel are charging ahead with their own quantum devices.
They might get there in 2021, or they might not. Alternatively, in true quantum fashion, they might both get there and not get there simultaneously. In any event, it’ll be intriguing days ahead for computer engineering.
Simply put, the internet of things (IoT) allows technological devices to talk to each other. This has some innocuous applications, such as your thermostat regulating itself or your refrigerator detecting you’re out of milk and ordering more from Amazon. But it also makes it possible for businesses to optimize their supply chains, for inanimate objects to be turned into spigots of valuable data, and for cars to drive themselves (not to mention for ships to sail themselves and even planes to fly themselves). Unlocking the full potential of IoT devices has been a Holy Grail for computer engineers for years, but the biggest hurdles to IoT’s growth (less-than-real-time data access, limited bandwidth, and outdated operating systems) are set to be cleared in 2021.
IoT devices produce a massive amount of data, and that data needs to be processed through data centers. Managing the flow of information across cloud servers reduces speeds to less than real time, a critical limitation for the more innovative applications of IoT. Edge computing solves this problem by bringing computation and data storage closer to the location where they’re needed, improving processing speeds and preserving bandwidth. With the proliferation of edge nodes on the level of cell-phone towers, edge computing can empower the IoT with real-time communications for autonomous cars, home automation systems, and smart cities.
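In practice, “bringing computation closer” often means filtering and aggregating at the edge node and forwarding only compact summaries upstream. The sketch below illustrates that pattern with hypothetical factory temperature readings; the sensor names, values, and alert threshold are invented for illustration.

```python
from statistics import mean

# Raw readings arriving at an edge node (e.g., a gateway near a cell tower).
# Values are hypothetical temperature samples from three factory sensors.
raw_readings = {
    "sensor-a": [21.2, 21.3, 21.1, 35.9, 21.2],
    "sensor-b": [19.8, 19.9, 20.1, 20.0, 19.7],
    "sensor-c": [22.5, 22.4, 22.6, 22.5, 22.7],
}

ALERT_THRESHOLD = 30.0   # react locally, without a round trip to the cloud

def process_at_edge(readings):
    summaries, alerts = {}, []
    for sensor, values in readings.items():
        summaries[sensor] = round(mean(values), 2)        # compact summary sent upstream
        spikes = [v for v in values if v > ALERT_THRESHOLD]
        if spikes:
            alerts.append((sensor, spikes))               # handled in real time at the edge
    return summaries, alerts

summaries, alerts = process_at_edge(raw_readings)
print("forward to cloud:", summaries)   # a few values instead of every raw sample
print("handle locally:  ", alerts)
```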
Bandwidth has long been the bane of IoT developers. The current capabilities of Wi-Fi and 4G simply aren’t able to handle the load of real-time communications necessary for sensors to link together in meaningful ways. But a critical tipping point for IoT developments could come with the dawn of 5G telecom networks. Dozens of cities have already been hooked up and a more comprehensive rollout across the nation is coming into 2021.
Operating systems like Windows and iOS were developed long before the internet of things was in focus, and, as a result, newer IoT applications can feel like a square peg in a round hole. That’s why, in April 2019, Microsoft acquired Express Logic, developer of a real-time operating system (RTOS) for IoT and edge devices powered by microcontroller units (MCUs). At the time of the acquisition, the RTOS already had over six billion deployments.
That’s just the beginning. Other industry heavyweights are looking to acquire or develop their own IoT operating systems. Gartner, a research firm, predicted there would be more than 20 billion connected devices in 2020, with over nine billion MCUs deployed annually. The total market for industrial IoT devices is expected to reach over $120 billion in less than a year. That means more and more people are talking about IoT—and more things in the IoT are talking to each other.
Today, digital twins are not limited to just physical objects. With the rise of virtual and augmented reality technologies, digital twins can now replicate entire environments and systems in a virtual space. This has opened up new possibilities for testing and simulation, allowing companies to reduce costs and risks associated with physical prototypes.
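At its core, a digital twin is a virtual object kept in sync with state reported by its physical counterpart, so risky experiments can be run on the copy instead of the original. The minimal sketch below illustrates that loop; the pump example and its toy thermal model are hypothetical.

```python
class DigitalTwin:
    """A virtual stand-in kept in sync with telemetry from a physical asset."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {"rpm": 0, "temp_c": 20.0}

    def sync(self, telemetry):
        """Mirror the latest readings reported by the physical asset."""
        self.state.update(telemetry)

    def simulate_overload(self, extra_rpm):
        """Try a risky scenario on the twin instead of the real machine."""
        projected_temp = self.state["temp_c"] + 0.01 * extra_rpm   # toy thermal model
        return "fails" if projected_temp > 90 else "survives"

twin = DigitalTwin("pump-07")
twin.sync({"rpm": 1800, "temp_c": 55.0})
print("pump-07 at +3000 rpm:", twin.simulate_overload(3000))   # 55 + 30 = 85 -> survives
print("pump-07 at +4000 rpm:", twin.simulate_overload(4000))   # 55 + 40 = 95 -> fails
```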
Diversity and inclusivity aren’t purely idealistic goals. A growing body of research shows that greater diversity, particularly within executive teams, is closely correlated with greater profitability. Today’s businesses are highly incentivized to identify a diverse pool of top talent, but they’ve still struggled to achieve it. Recent advances in AI could help.
The ability of a computer to learn and problem-solve (i.e., machine learning) is what makes AI different from any other major technological advance we’ve seen in the last century. More than simply assisting people with tasks, AI lets the technology take the reins and improve processes without any help from humans.
This guide, intended for students and working professionals interested in entering the nascent field of automotive cybersecurity, describes some of the challenges involved in securing web-enabled vehicles, and features a growing number of university programs, companies, and people who are rising to meet those challenges.
Unlike fungible items, which are interchangeable and can be exchanged like-for-like, non-fungible tokens (NFTs) are verifiably unique. Broadly speaking, NFTs take what amounts to a cryptographic signature, ascribe it to a particular digital asset, and then log it on a blockchain’s distributed ledger.
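Stripped of the blockchain machinery, the minting step is conceptually simple: fingerprint the asset, bind that fingerprint and an owner into a token record, and append the record to an append-only ledger. The sketch below is a conceptual illustration of that flow, not how Ethereum or any real NFT standard implements it.

```python
import hashlib
import json
import time

ledger = []   # stand-in for a blockchain's distributed, append-only ledger

def mint_nft(asset_bytes: bytes, owner: str) -> dict:
    """Bind a unique fingerprint of a digital asset to an owner and log it."""
    token = {
        "asset_fingerprint": hashlib.sha256(asset_bytes).hexdigest(),
        "owner": owner,
        "token_id": len(ledger),          # unique, non-interchangeable identifier
        "minted_at": time.time(),
    }
    ledger.append(token)                  # in reality: a transaction recorded on-chain
    return token

artwork = b"<bytes of a digital artwork>"
print(json.dumps(mint_nft(artwork, owner="alice"), indent=2))

# Fungible vs. non-fungible: two identical coins are interchangeable, but two
# tokens differ even if the underlying bytes match, because each carries its
# own token_id and provenance on the ledger.
```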
First proposed by computer scientist Nick Szabo in the 1990s and later pioneered by the Ethereum blockchain in 2015, smart contracts are programs that execute themselves when certain predetermined conditions are met.
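The defining property is that the “if this condition, then this transfer” logic is code rather than a promise. Real smart contracts run on-chain (on Ethereum, typically written in Solidity); the toy escrow below only mimics that behavior in ordinary Python to show the shape of the idea.

```python
class ToyEscrow:
    """A toy stand-in for a smart contract: funds release only when a
    predetermined condition is met, with no human in the loop."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self, oracle_report: bool):
        """An 'oracle' reports the real-world condition (e.g., package delivered)."""
        self.delivered = oracle_report
        self._maybe_execute()

    def _maybe_execute(self):
        if self.delivered and not self.released:
            self.released = True
            print(f"releasing {self.amount} from {self.buyer} to {self.seller}")

contract = ToyEscrow(buyer="alice", seller="bob", amount=100)
contract.confirm_delivery(oracle_report=True)   # condition met -> the contract executes itself
```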
This is a role for tech-lovers, for logical thinkers, for those who like being given an answer and then being told to find the question. But it’s also a role for communicators, for relationship builders, for people who enjoy cross-departmental collaboration.
The field of digital marketing intersects with many other tech industries and grew out of traditional theories of advertising, marketing, and sales. Just like traditional marketing, the goal is to reach your target customer base, build brand awareness, and make a meaningful, data-generating connection.
Computer science drives the modern world. Its applications help save lives, amplify marginalized voices, and enrich humanity’s understanding of itself. And the capabilities of computer science are only growing: today, the world’s six billion smartphone owners possess, in their pocket, more powerful computers than those that originally sent men to the moon.
October is Cybersecurity Awareness Month, which aims to help individuals protect themselves online. It’s also an opportunity to recognize the important work that cybersecurity professionals do to keep us, our businesses, and our nation’s infrastructure safe.