Over the past three years, a rush of excitement has emerged globally regarding artificial intelligence. In a student's everyday life, discussions about artificial intelligence arise frequently, whether about the potential benefits of generative AI, using ChatGPT on homework assignments, or seeing AI's growing presence on social media platforms like TikTok.
Claims that AI holds significant potential for the development of society and technology are impossible to ignore, with AI appearing across numerous sectors of daily life. In fact, when I began writing this article, even clicking enter on a Google search titled "Impact of AI on climate change" immediately caused an AI overview to pop up unprompted.
AI generated images / The Economic Times India
While the environmental repercussions of AI usage cannot be ignored, to deny the multitude of potential benefits from artificial intelligence would be absurd. Instead, the real issue is the use of (mostly generative) AI for recreational purposes: hundreds of thousands of people contribute to this environmental impact without realizing that, according to the International Energy Agency, even a short ChatGPT prompt consumes roughly 4-10 times the energy of a single Google search.
Four key problems explain why AI can cause widespread harm to our environment. First, the mining required to extract critical minerals and rare earth elements for the microchips that power AI is incredibly destructive to the environments where these resources are found. Navigating New Horizons confirms this, stating,
“[The minerals and elements] are often mined unsustainably”.
The second is that AI servers are housed in data centers, which produce a shocking amount of electronic waste. According to the United Nations Environment Programme (UNEP), this waste also contains hazardous substances such as mercury and lead. When it is disposed of improperly, as it often is, the surrounding wildlife, soil, air, and water are contaminated.
Thirdly, these AI data centers consume preposterous amounts of electricity because of the advanced technology the models require. Much of that energy comes from fossil fuels, which produce greenhouse gases that further contribute to global warming. Research by the University of Nottingham shows that by 2026, AI data centers will likely account for nearly 35% of Ireland's energy consumption. With global temperatures already rising at an increasing rate, added pressure on the climate is something we simply cannot afford.
Pollution due to Elon Musk’s AI data center in Memphis / NAACP
Finally, and most of all, data centers consume a colossal amount of water, not only during construction but also to cool the electrical components that run AI. Chilled water absorbs heat from computing equipment. Much of this water does not return to the local water cycle: the centers use mechanical chillers that carry heat away from the servers and release it through a condenser, turning the water into vapor that never cycles back through treatment systems the way household water does. Even though some of it returns as rainfall, the majority of the vapor cannot be recovered. On top of this, data centers are often located in regions already prone to drought, leaving the people who live there with even less access to water. This is a huge problem when a quarter of humanity already lacks access to clean water and sanitation. MIT News tells us that for every kilowatt-hour of energy a data center consumes, it needs about two liters of water for cooling. It is an atrocity to restrict so much life from access to clean water and instead use it on generating 'a cartoon version of me' or asking ChatGPT to write a quick email that the individual could have written in just two minutes.
The impacts of these contributors on climate change are immense. It also doesn't help that generative AI models have an extremely short shelf life, as companies such as OpenAI and DeepSeek consistently deliver new models, driven by rising demand for new AI applications. The energy used to train previous models is effectively wasted every few weeks, and each new model uses even more energy because it is more advanced than the last. Sure, one person using Perplexity AI does little harm on its own, but if everyone follows this logic, the sheer scale of AI use results in terrible repercussions.
On the other hand, popular articles counter that because "500ml of water is used for every 20-50 ChatGPT prompts, not every prompt," ChatGPT's resource use is not that significant. However, as govtech.com notes, even if 500ml sounds small, multiplied across the 122 million people who use ChatGPT daily, it adds up to an enormous amount of water spent for trivial reasons. AI's energy use has exploded because AI itself has exploded: no single prompt uses much energy, but AI has been the most quickly adopted technology ever, so the total energy consumed by everyone using it becomes significant.
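To see how that scale argument plays out numerically, here is a minimal back-of-the-envelope sketch in Python. It uses only the rough figures quoted above (500ml per 20-50 prompts, 122 million daily users); the prompts-per-user value is an assumption added purely for illustration, not a sourced statistic.

```python
# Rough daily water estimate from the figures cited above.
# The liters-per-prompt range uses "500 ml per 20-50 prompts";
# PROMPTS_PER_USER is an assumed number, not a sourced one.

DAILY_USERS = 122_000_000
PROMPTS_PER_USER = 10  # illustrative assumption

for label, liters_per_prompt in (("low", 0.5 / 50), ("high", 0.5 / 20)):
    daily_liters = liters_per_prompt * PROMPTS_PER_USER * DAILY_USERS
    print(f"{label}: ~{daily_liters / 1e6:.0f} million liters of water per day")
```

Under these assumptions the total lands in the tens of millions of liters per day, which is the sense in which individually tiny footprints add up.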
As a society, we have to acknowledge that even though AI provides an abundance of opportunities and ideas for our modern world, we must not forget the consequences that overuse brings to an already declining environment. We should consider that, for the average person, life would most likely not be worse without generative AI, and that trading mindless entertainment and having ChatGPT look up basic facts for a better chance at restoring our Earth is worth it. Ultimately, we should simply refrain from using AI recreationally and save it for purposes that are genuinely necessary.
How Scientists are Using Worms to Learn About Humans
Worms and humans could hardly be more different. And yet, scientists have been studying C. elegans (Caenorhabditis elegans) to learn more about the human body for over 60 years. These unassuming worms have aided in groundbreaking findings in medicine for human conditions such as Alzheimer's, AIDS, and stroke.
What makes C. elegans so valuable is not its complexity, but rather its simplicity. Because so many of its biological pathways are conserved in humans, this worm provides a unique window into the fundamental processes of life, including cell division, gene regulation, neural signaling, and aging. With a transparent body, rapid life cycle, and a genetic makeup that mirrors much of our own, C. elegans has become an essential organism in modern biomedical research. Understanding how scientists use these worms can help us appreciate not just what we've already learned, but also the vast potential that still lies ahead.
What is C. elegans?
C. elegans is a free-living nematode that has become one of the most important model organisms in research. It measures approximately one millimeter in length and naturally lives in temperate soil environments, where it feeds on bacteria such as E. coli. It is non-parasitic and exists in two sexes: hermaphrodites, which are capable of self-fertilization, and males, which arise at a frequency of less than 0.1% under normal conditions. The hermaphroditic reproductive mode allows for the maintenance of isogenic populations, which is advantageous for genetic studies.
The adult C. elegans hermaphrodite has exactly 959 somatic cells, while the adult male has exactly 1,031. The worm's relatively simple anatomy includes muscles, a nervous system, a digestive system, a reproductive system, and an excretory system. The organism develops through four larval stages before reaching adulthood, completing its life cycle in just a few days and living for roughly two to three weeks under laboratory conditions.
Genetically, C. elegans has a compact genome consisting of about 100 million base pairs across six chromosomes. In 1998, it became the first multicellular organism to have its entire genome sequenced, in a project led by John Sulston and Robert Waterston. Its genome is highly amenable to manipulation using a variety of modern techniques.
Why do scientists study C. elegans specifically?
First introduced into studies by Sydney Brenner in the 1960s to study neurological development and the nervous system, the nematode proved itself in the lab with its unique combination of genetic, anatomical, and practical features that make it exceptionally suitable for biomedical research.
Remarkably, around 60-70% of human disease-associated genes have counterparts in the C. elegans genome, making it an incredibly valuable model for studying human biology. Many genes responsible for critical cellular functions are evolutionarily conserved between worms and humans. Therefore, scientists can manipulate the function of these genes in C. elegans to study their roles in disease without the complexity or ethical challenges of working with human subjects or higher animals like mice or primates.
Every adult hermaphrodite has the same invariant set of cells, each of which has been identified and mapped, allowing for detailed tracking of development, differentiation, and cellular processes. Its transparent body enables real-time visualization of internal structures, including neurons, muscles, reproductive organs, and digestive tissues. The worm's simple nervous system of only 302 neurons makes it one of the only organisms in which every neural connection is known. Additionally, C. elegans develops in only a few days, lives for just two to three weeks, and is easy to culture in large numbers, making it especially convenient for developmental and aging studies.
How do scientists modify C. elegans in experiments?
Scientists modify and study C. elegans using four primary methods: RNA interference (RNAi), CRISPR-Cas9 genome editing, transgenic techniques, and drug screening.
One of the most widely used techniques for modifying gene expression in C. elegans is RNA interference (RNAi). This method allows scientists to silence specific genes and observe the effects of their absence. In C. elegans, RNAi can be administered simply by feeding worms genetically engineered E. coli that produce double-stranded RNA (dsRNA) matching the gene of interest. Once ingested, the dsRNA activates the worm's endogenous RNAi pathway, leading to degradation of the corresponding messenger RNA and a reduction or elimination of the target protein. This method is highly efficient, non-invasive, and relatively easy to perform, making it ideal for large-scale genetic screens. Researchers have used it to identify genes involved in key processes such as embryonic development, aging, metabolism, and neurodegeneration.
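To make "dsRNA matching the gene of interest" a little more concrete, here is a toy Python sketch that takes a short stretch of a target gene and writes out the two complementary RNA strands that would pair into dsRNA. The sequence and function name are invented for illustration; real feeding constructs are built from actual C. elegans gene sequences cloned into bacterial plasmids.

```python
# Toy illustration of a dsRNA trigger: the sense strand mirrors the target
# mRNA, and the antisense strand is its reverse complement.
# The target fragment below is made up for demonstration purposes.

DNA_TO_RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def dsrna_from_target(dna_target: str) -> tuple[str, str]:
    """Return the (sense, antisense) RNA strands for a DNA target region."""
    sense = dna_target.replace("T", "U")  # same sequence as the mRNA
    antisense = "".join(DNA_TO_RNA_COMPLEMENT[base] for base in reversed(dna_target))
    return sense, antisense

target_fragment = "ATGGCTAAGTTCGACCTT"  # hypothetical fragment of a gene of interest
sense, antisense = dsrna_from_target(target_fragment)
print("sense strand:    ", sense)
print("antisense strand:", antisense)
```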
The CRISPR-Cas9 system has revolutionized genetic research in C. elegans by enabling precise, targeted alterations to the genome. Scientists introduce a complex composed of the Cas9 enzyme and a guide RNA (gRNA) into the worm, which directs Cas9 to a specific DNA sequence. Once there, Cas9 introduces a double-strand break in the DNA. The cell's natural repair mechanisms then fix the break, and researchers can insert, delete, or replace specific DNA sequences. In C. elegans, CRISPR can be used to create mutants mimicking human disease alleles or to study the regulatory elements of genes. This method provides a level of control that surpasses RNAi, as it allows for permanent and heritable genetic modifications. Scientists often inject the CRISPR-Cas9 components directly into the gonads of adult hermaphrodites, ensuring that the genetic changes are passed on to the offspring.
Transgenic techniques in C. elegans insert foreign DNA into the worm’s genome to monitor gene expression, trace cell lineages, or study protein localization. One common approach is to fuse a gene of interest to a reporter gene such as green fluorescent protein (GFP). When this gene is expressed, the fluorescent tag can be visualized in living worms using fluorescence microscopy. This allows researchers to observe where and when specific genes are active, how proteins move within the cells, and how cells interact during development or disease progression. Transgenes are typically introduced via microinjection into the syncytial gonads of adult worms, leading to the formation of extrachromosomal arrays inherited by the next generation. Stable lines can also be created through CRISPR or chemical integration methods. These visual tools are particularly powerful due to the worm’s transparent body, which makes it possible to track fluorescent signals in real time throughout the entire organism.
C. elegans is an excellent system for drug screening and environmental toxicology due to its small size, short lifespan, and genetic tractability. Researchers can test the effects of thousands of compounds quickly and cost-effectively. In these experiments, worms are exposed to chemical agents in liquid or on agar plates, and their survival, movement, reproduction, or specific cellular markers are measured to assess the biological impact. Using genetically modified strains that mimic human disease pathways, scientists can screen for drugs that alleviate symptoms or restore normal function. These tests provide an efficient first step in drug development, singling out promising candidates before moving on to mammalian models.
The cell lineage and the programmed cell death in C. elegans / Nobel Prize in Physiology or Medicine 2002
One of the most groundbreaking discoveries made using C. elegans was the genetic basis of programmed cell death, or apoptosis, a critical process in both development and disease. The research was led by Dr. H. Robert Horvitz at the Massachusetts Institute of Technology. Horvitz and his colleagues began studying cell death in C. elegans in the 1980s by tracing the fate of every cell in the worm’s body during development. They discovered that exactly 131 cells always die in the developing hermaphrodite and that this process was genetically controlled. Through genetic screening, Horvitz identified three core genes that regulated apoptosis: ced-3, ced-4, and ced-9. By inducing mutations in these genes, the researchers could either prevent or accelerate cell death in the worm. This revealed that cell death is not a passive consequence of damage, but rather an active, genetically programmed event. The mammalian counterparts of these genes, like caspases and BCL-2, were later discovered to play central roles in cancer, autoimmune diseases, and neurodegeneration, making this research foundational to modern medicine. Horvitz was awarded the 2002 Nobel Prize in Physiology or Medicine for his work along with Sydney Brenner and John Sulston.
In addition, C. elegans has contributed to our understanding of neurodegenerative diseases such as Alzheimer’s. One major study was led by Dr. Christopher Link at the University of Colorado in the late 1990s. Link developed a transgenic C. elegans strain that expressed the human β-amyloid (Aβ) peptide in muscle cells. This is the same peptide that forms toxic plaques in the brains of Alzheimer’s patients. In the study, the researchers observed that worms expressing Aβ developed progressive paralysis as they aged, mimicking aspects of human Alzheimer’s pathology. They then used this model to screen for genetic mutations and chemical compounds that could suppress the toxic effects of Aβ. Their work identified several genes involved in protein folding and stress response that modified Aβ toxicity. This demonstrated that C. elegans could be used as a fast and cost-effective in vivo system for identifying genetic and pharmacological modifiers of Alzheimer’s disease. The worm model has since then been adapted by numerous labs worldwide to study tau protein aggregation and mitochondrial dysfunction, expanding our knowledge of neurodegenerative pathways.
Another major discovery made using C. elegans was the link between insulin signaling and lifespan regulation. Dr. Cynthia Kenyon at the University of California, San Francisco, led a series of experiments in the 1990s that transformed the field of aging research. Kenyon’s team discovered that a single mutation in the daf-2 gene, which encodes an insulin/IGF-1 receptor, could double the worm’s lifespan. They found that when daf-2 signaling was reduced, it activated another gene, daf-16, which promoted the expression of stress-resistance and longevity-related genes. To test this, Kenyon used genetic mutants and tracked their development and survival across generations. The C. elegans with the daf-2 mutation lived significantly longer than their wild-type counterparts and were more resistant to oxidative stress and heat. These findings provided the first clear evidence that aging could be actively regulated by specific genetic pathways rather than being a passive deterioration process. Later studies found that similar insulin/IGF-1 pathways exist in mammals, including humans, opening new therapeutic avenues for age-related diseases, diabetes, and metabolic disorders.
So what does the future hold?
The future of C. elegans in scientific research is remarkably promising, with its applications continually expanding as technology and genetic tools advance. With the rise of CRISPR-Cas9, optogenetics, and high-throughput screening techniques, researchers can now manipulate C. elegans with unprecedented precision to study complex biological processes such as epigenetics, gut-brain interactions, and real-time neuronal activity. In the coming years, C. elegans is expected to play an even greater role in personalized medicine and systems biology. Its potential as a predictive model for human gene function could aid in understanding the consequences of mutations found in patient genomes, leading to more tailored treatments. The worm's short life cycle, fully mapped genome, and conserved biological pathways make it an ideal model for rapidly identifying new therapeutic targets and testing drugs, especially for age-related and neurodegenerative diseases. Despite its simplicity, this tiny nematode continues to open doors to complex human biology, proving that even the smallest organisms can have the biggest impact on science and medicine.
“C. Elegans 101: A White Paper – InVivo Biosystems.” InVivo Biosystems, 26 Jan. 2024, invivobiosystems.com/disease-modeling/c-elegans-101-a-white-paper/.
Edgley, Mark. “What Is Caenorhabditis Elegans and Why Work on It? – Caenorhabditis Genetics Center (CGC) – College of Biological Sciences.” Umn.edu, University of Minnesota, 2022, cgc.umn.edu/what-is-c-elegans.
Venkatesan, Arun, and Krishma Adatia. “Anti-NMDA-Receptor Encephalitis: From Bench to Clinic.” ACS Chemical Neuroscience, vol. 8, no. 12, 7 Nov. 2017, pp. 2586–2595, https://doi.org/10.1021/acschemneuro.7b00319.
Wheelan, Sarah J, et al. “Human and Nematode Orthologs — Lessons from the Analysis of 1800 Human Genes and the Proteome of Caenorhabditis Elegans.” Gene, vol. 238, no. 1, Sept. 1999, pp. 163–170, https://doi.org/10.1016/s0378-1119(99)00298-x.
“Whitehead Institute of MIT.” Whitehead Institute of MIT, wi.mit.edu/unusual-labmates-how-c-elegans-wormed-its-way-science-stardom.
“Wonderous Worms.” NIH News in Health, 3 July 2025, newsinhealth.nih.gov/2025/07/wonderous-worms. Accessed 1 Aug. 2025.
Zhang, Siwen, et al. “Caenorhabditis Elegans as a Useful Model for Studying Aging Mutations.” Frontiers in Endocrinology, vol. 11, 5 Oct. 2020, https://doi.org/10.3389/fendo.2020.554994.
As someone born and raised in New York City (NYC), I can attest to the urgent need to upgrade the city's climate control infrastructure. Current systems are outdated and hinder the city's ability to meet emissions goals and address global warming; nothing encapsulates this problem better than the boiler. A staggering 72.9% of heating in NYC comes from fossil-fuel-burning steam boilers, one of the most carbon-intensive options available. Tenants pay for the maintenance of centralized boilers without any control over the temperature, leading many to open their windows in winter to release excess warmth. This heat, and the fossil fuels burned to produce it, is wasted, highlighting the inefficiency and impracticality of NYC's existing infrastructure.
Even when this heat remains indoors, steam boilers are only about 80-85% efficient at burning fossil fuels. Up to a fifth of a boiler’s fuel does not generate usable heat, but burning it still releases vast quantities of pollutants like CO2, exacerbating climate change. Furthermore, boilers continue to lose efficiency during their lifetimes and require frequent maintenance and replacement. While steam boiler systems were revolutionary in the 19th century, they may now become obsolete as NYC implements a technology that could change how the world thinks about climate control.
The innovation behind heat pumps comes from the mantra of using what is already there: instead of generating heat through combustion, they simply move existing warmth between two places. Most of these fully electric pumps remain functional well below 0℃, even though it may seem like there is no warmth left to move. This capability allows them to reach heating efficiencies of 300-500%! Because of this, Yannick Monschauer of the International Energy Agency estimates that "Heat pumps could bring down global CO2 emissions by half a gigaton by the end of this decade."
Heat pumps operate on the Second Law of Thermodynamics (SLOT), which states that heat flows from a hotter object to a colder one. In the wintertime, the pumps pull in outdoor air and blow it over fluids (called refrigerants) held in a closed-loop system. Even cold outdoor air transfers warmth to the much colder refrigerants through SLOT, and the heated fluids turn into gas. Heat pumps can work in freezing temperatures because these refrigerants have unusually low boiling points, allowing them to vaporize easily; one of them, Refrigerant 12, has a boiling point of just -21.64°F!
The hot, gaseous refrigerants move into a compressor that compacts their molecules, making them even warmer. They then flow through a coil that exposes them to indoor air, and the refrigerants release their warmth inside through SLOT. As the refrigerants cool, they condense back into liquid and pass through an expansion valve, decreasing their temperature further. They move to an outdoor coil and are ready to restart the process, continuing to warm cold homes during the winter.
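The 300-500% efficiencies quoted above correspond to a coefficient of performance (COP) of roughly 3 to 5. As a rough, idealized illustration (the Carnot limit, not the rating of any particular unit), take indoor air at T_h = 293 K (20°C) and outdoor air at T_c = 273 K (0°C):

\[
\mathrm{COP}_{\text{heating}} = \frac{Q_h}{W} \;\leq\; \frac{T_h}{T_h - T_c} = \frac{293}{293 - 273} \approx 14.7
\]

Real pumps fall well short of this ceiling because of friction, imperfect heat exchange, and defrost cycles, but even a COP of 3 means three units of heat delivered to the home for every one unit of electricity consumed.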
Even more significantly, heat pumps have reversing valves that switch the flow of their refrigerants. These valves allow the pumps to cool homes in the summertime by carrying indoor warmth outside. Thus, heat pumps make air conditioners, boilers, radiators, and related piping unnecessary, freeing space and alleviating material and labor costs that typically get passed on to homeowners.
Heat pumps in NYC
In 2024, NYC pledged to have heat pumps provide 65% of residential heating, air conditioning, and water-heating needs by 2030. This shift would drastically reduce the city's carbon emissions from the climate control sector, which accounted for about 10% of global energy-related CO2 emissions in 2021.
This pledge is logical both environmentally and practically: having one heat pump replace two systems saves valuable space, lowers costly installation and maintenance fees, and reduces energy demands. The NYC government realized this potential and signed a $70,000,000 contract to install 30,000 window heat pumps in NYCHA buildings, better known as public housing. Two heating companies, Midea and Gradient, will provide these units.
In late 2023, Gradient installed 36 preliminary test units in NYCHA buildings. Most NYC steam boilers, including those in NYCHA’s current system, are powered by gas with oil reserves in case of an emergency. Gradient found that their pump can lower tenants’ heating bills by 29-62% on moderate winter days compared to gas-powered boilers. Savings are as high as 59-78% compared to oil-burning boilers. In testimonials that Gradient collected, NYCHA tenants noted the heat pumps’ impressive air filtration, heating, and operational capabilities. Midea conducted similar tests and soon plans to release its heat pump for public purchase.
The Cold Drawbacks of Heat Pumps
Heat pumps do come with drawbacks, even as NYC continues its plans to install and promote them in place of its outdated systems. For one, Midea's upcoming pump will cost ~$3,000 per unit, greatly exceeding the combined ~$460 price of the company's bestselling single-room heating and cooling systems; the comparison is imperfect, though, because a single heat pump replaces both. The technology can lower electricity and fuel bills over an extended period, but the current price point makes heat pumps an unaffordable investment for many households, despite government subsidies and incentives. Even the NYC government's bulk order of Midea and Gradient pumps averages over $2,300 per unit.
Furthering the inaccessibility of these systems, the most advanced, aesthetically pleasing, and apartment-friendly heat pumps can only heat and cool individual rooms. This means that multiple units must be purchased, installed, and powered to service a home, and each must be replaced about every 20 years. Still, NYC’s firm stance on heat pumps indicates the climate control systems’ proven efficacy, practicality, and sustainability.
Heat Pumps Globally, and Plans for the Future
While technological challenges remain, NYC is continuing to deliver on its pledges, and similar decisions about heat pumps are being made throughout the United States (US). In 2022, heat pump sales in the US significantly outpaced those of gas furnaces (a type of central heating system particularly popular in North America). This lead has continued into 2025 as more people realize that the pumps can lower fossil fuel emissions and energy bills.
This switch is not just happening in the US; countries worldwide are beginning, or continuing, to invest in these pumps. Sales in European countries have soared in the 21st century, an accomplishment partly attributed to friendly government policy. The country with the most pumps relative to its population, Norway, has 632 heat pumps installed for every 1,000 households (the majority of these pumps service entire houses, unlike the Midea and Gradient systems discussed above). Even with this high ownership rate, another 48 pumps were purchased for every 1,000 Norwegian households in 2024.
In spite of these promising statistics, heat pump sales in most economies have either slowed or slumped in recent years, particularly in Europe. Analysts suspect this is due to high interest rates, rising electricity prices, low consumer confidence, and low gas prices. While this is discouraging, pump sales and ownership rates remain higher than they were several years ago. In 2023, New York Governor Kathy Hochul pledged to help the U.S. Climate Alliance (USCA) install 20,000,000 pumps across the U.S. The USCA is a coalition of 24 governors representing 54% of the United States population and 57% of its economy. The bipartisan group has delivered on its promises of emissions reduction, climate resilience, economic growth, energy savings, and zero-carbon electricity standards, goals that heat pumps are engineered to help meet.
This coalition has proved that environmental action is popular, necessary, and possible. At a time when climate policy is under question, sustainable and feasible technologies – like heat pumps – need the investment of citizens, industries, and governments alike; no matter how small the scale.
So, how can you help? Since 2022, the US government has given a federal tax credit to citizens who install efficient heat pumps. The Energy Efficient Home Improvement Credit provides eligible homeowners up to $2,000 annually. Combined with other energy-efficient credits, US citizens can regain up to $3,200 every year. These monetary incentives offer another reason to consider switching to heat pumps, and similar policies are being enacted worldwide.
I am proud to live in a city that rewards and encourages the sustainability of citizens, corporations, and public works. As the severity and irreversibility of global warming loom, heat pumps offer us a breezy solution to polluting climate control systems. Eventually, NYC’s infamous boiler rooms and clanging pipes may become relics of the past.
While the concepts of space and time were fundamental to the Newtonian world, centuries of digging deeper into the mechanics of our universe have uncovered that it isn’t all as simple as it seems. From Einstein’s Special Relativity to theories of multi-dimensional time, the science behind space and time has evolved into a complex field.
Why Extra Temporal Dimensions?
The search for extra spatial dimensions naturally raises the question of extra temporal dimensions: if space can have more dimensions, why can't time? The motivation to explore extra temporal dimensions arises from a desire to better understand the nature of time and the symmetries between space and time.
Another reason to study extra temporal dimensions is the desire to unify seemingly disconnected descriptions of time. Many frameworks with extra temporal dimensions have revealed previously unnoticed symmetries and relationships between different systems that would not be apparent when working in a single time dimension.
The concept of "complex time," in which time is represented as a complex number rather than a real one, has also been proposed to address some puzzles of quantum mechanics. It would allow more ways to represent wave-particle duality, entanglement, and other fundamental concepts of quantum physics.
2T-Physics
Proposed by physicist Itzhak Bars, 2T-Physics suggests that the one dimension of time we experience is really just a “shadow” of the real two dimensions of time. The core motivation of 2T-Physics is to reveal the deeper temporal connections that we don’t see in our one-dimensional perspective. In 2T-Physics, two seemingly disconnected temporal systems are actually connected and represent different views or ‘shadows’ of the same two-dimensional time.
2T-Physics unifies a wide range of physical systems using "gauge symmetry": the property that a set of transformations, called gauge transformations, can be applied to a system without changing any of its physical properties. Bars also showed that the Standard Model can be reproduced within 2T-Physics using four spatial dimensions and two temporal ones. Not only can this framework reproduce most of the Standard Model, but it also offers new perspectives on some long-standing quantum puzzles.
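The most familiar example of a gauge symmetry comes from ordinary electromagnetism (given here only as an illustration, not as part of 2T-Physics itself): the potential can be shifted by the gradient of any function λ without changing the measurable fields,

\[
A_\mu \;\to\; A_\mu + \partial_\mu \lambda, \qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \ \text{unchanged}.
\]

2T-Physics relies on a larger symmetry of this general kind to remove the unphysical degrees of freedom that an extra time dimension would otherwise introduce.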
An interesting difference between the Standard Model and the predictions of 2T-Physics concerns the gravitational constant. While the coefficient in gravitational equations is currently established as a constant, 6.67 × 10⁻¹¹ N·m²/kg², the mathematics of 2T-Physics implies that the gravitational constant could take different values during different periods of our universe (inflation, grand unification, etc.). This allows new possibilities for the early expansion of the universe that General Relativity and the Standard Model do not. Through these new perspectives, 2T-Physics offers a more complete framework for gravity, especially in higher dimensions.
While 2T-Physics is mathematically well developed, it remains highly theoretical and has had little practical impact so far. There is no evidence directly supporting the theory, but it does predict certain connections between different physical systems that could in principle be verified through experiments, though none have been conducted so far. Above all, 2T-Physics provides a new perspective on time and on the nature of physical law that has opened the eyes of many scientists and will likely inspire future discoveries.
3D Time
One of the most recent contributions to the field, a paper by Kletetschka, proposes a mathematical framework for spacetime that includes three temporal dimensions, offering a new perspective on combining gravity and quantum mechanics. Instead of hiding two extra dimensions of time, Kletetschka theorizes that each of the three represents time at a different scale: the quantum scale, the interaction scale, and the cosmological scale. He explains that the other two dimensions are not visible in daily life because they operate at very small (quantum) or very large (cosmological) scales.
A major difference between this theory and conventional physics is that, while conventional physics treats space as something vastly different from time, Kletetschka proposes that space is a byproduct of time in these three dimensions rather than an entirely separate entity. What we experience as mass or energy actually arises from the curvature of time across the three dimensions. As Kletetschka developed the idea further, he found the mathematics surprisingly consistent, which encouraged a deeper exploration of the concept.
The key to avoiding causality problems and instability in the theory was the use of ordinary geometry and familiar spatial dimensions rather than exotic constructions that are hard to prove or test. The theory aims to address many long-standing issues in quantum mechanics, and its success so far has made it a prominent proposal in the field.
The theory adds extra temporal dimensions without creating causality problems, something very few theories of its type have managed, and this is a direct consequence of its structure. The three axes share an ordered flow, preventing an event from happening before its cause, and they operate at such different scales that there is very little overlap between them. The mathematics of the framework also forbids altering events in the past, something many other theories allow.
The theory offers physical significance and a connection to our world alongside mathematical consistency. Features such as finite quantum corrections, which other theories could not produce, emerge from this model without adding extra complexity.
The framework also predicts several properties and new phenomena that can be experimentally tested, opening pathways to support or rule it out in the near future. Meanwhile, a number of scientists have spoken in support of the theory, considering it a promising step toward a "Theory of Everything" just a few months after its publication.
Conclusion
While the theoretical motivation for extra dimensions is compelling, the reality of their existence remains unconfirmed. Meanwhile, the scientific community works to confirm or rule out their existence through experiment and observation.
The Large Hadron Collider (LHC) at CERN is one of the major players on the experimental side. They engage in many experiments, a few of which I have highlighted below.
Tests for Microscopic Black Holes: Many theories that propose extra dimensions imply that gravity becomes much stronger at very short distances. If so, the LHC's collisions could produce microscopic black holes that would evaporate almost instantly through Hawking radiation, and the particles produced by that evaporation could be picked up by the LHC's detectors.
The Graviton Disappearance: Another common feature of extra-dimensional theories is that gravity is carried by a particle called the graviton. Such a particle could escape into the extra dimensions, taking energy with it and leaving a measurable imbalance in the total energy of a collision.
While these experiments have tightened the limits on the parameters such theories can take, they have yet to prove or disprove any of them.
Meanwhile, it is important to consider what extra dimensions would mean for us and the way we live. The concept of extra dimensions provides multiple philosophical considerations for us as humans. This concept completely changes our worldview and affects our perception of the universe. Dr. Michio Kaku explains this through the analogy of a fish in a pond, unaware of the world outside its simple reality. Our perception of reality is limited, not only by our understanding of physics, but also by the biology of our brains.
The work toward a "Theory of Everything" is not only a physical goal but a philosophical one as well. We strive to understand our universe, and everything within it, in the simplest way possible. It embodies the human desire for ultimate knowledge and has driven centuries of progress in physics.
Overall, the concept of extra dimensions represents one of the most arduous and ambitious goals in human history. Though unproven, these theories motivate people to probe deeper into the nature of our universe and to question the very fabric of reality. This drive toward further discoveries says a great deal about who we are as humans and will continue to motivate generations of physicists to question the nature of everything.
DUFF, M. J. (1996). M THEORY (THE THEORY FORMERLY KNOWN AS STRINGS). International Journal of Modern Physics A, 11(32), 5623–5641. https://doi.org/10.1142/s0217751x96002583
Gunther Kletetschka. (2025). Three-Dimensional Time: A Mathematical Framework for Fundamental Physics. Reports in Advances of Physical Sciences, 09. https://doi.org/10.1142/s2424942425500045
Kalligas, D., Wesson, P. S., & Everitt, C. W. F. (1995). The classical tests in Kaluza-Klein gravity. The Astrophysical Journal, 439(2). https://ntrs.nasa.gov/citations/19950044695
Lloyd, S., Maccone, L., Garcia-Patron, R., Giovannetti, V., Shikano, Y., Pirandola, S., Rozema, L. A., Darabi, A., Soudagar, Y., Shalm, L. K., & Steinberg, A. M. (2011). Closed Timelike Curves via Postselection: Theory and Experimental Test of Consistency. Physical Review Letters, 106(4). https://doi.org/10.1103/physrevlett.106.040403
While the concepts of space and time were fundamental to the Newtonian world, centuries of digging deeper into the mechanics of our universe have uncovered that it isn’t all as simple as it seems. From Einstein’s Special Relativity to theories of multi-dimensional time, the science behind space and time has evolved into a complex field.
What are Extra Spatial Dimensions?
As scientists explored spacetime further, theories of additional dimensions of space, beyond the three we know, were proposed as a way to account for phenomena that cannot be explained with only three. These ideas gained most of their traction from the pursuit of combining quantum mechanics with General Relativity, especially the problem of quantum gravity. Such theories also attempt to address the rapid growth of the universe after the Big Bang.
What were the motivations to search for Extra Dimensions?
The idea of more dimensions began as a way to unify the fundamental forces of our universe. Modern theories regarding these ideas come from a drive to resolve some of the unaddressed issues of the Standard Model of physics. While the Standard Model is able to describe fundamental particles and the strong, weak, and electromagnetic forces, it is unable to describe gravity. In addition, the Standard Model cannot address dark matter and dark energy, which make up the majority of our universe.
One of the most significant problems in physics is the hierarchy problem: the massive gap in strength between gravity, which is extraordinarily weak, and the other three fundamental forces. Extra-dimensional theories attempt to resolve this by suggesting that gravity may actually be just as strong as the other forces, but that its strength leaks into the extra dimensions, diluting what we feel in our own.
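One common way to make this "leaking" quantitative, in large-extra-dimension proposals such as the ADD model (quoted here only as a representative example), relates the Planck scale we measure to a fundamental scale M_* and the size R of n compact extra dimensions:

\[
M_{\mathrm{Pl}}^{2} \;\sim\; M_{*}^{\,n+2} R^{\,n}.
\]

If R is large compared with the fundamental length scale, the observed Planck mass comes out enormous even for a modest M_*, which is exactly the statement that gravity only looks weak in our four dimensions.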
This search for extra dimensions is not only about solving specific technical issues; it is part of the centuries-long quest for a Theory of Everything. Physicists constantly strive for simpler descriptions of our universe rather than leaning on long lists of finely tuned constants.
While there are many theories involving extra-spatial dimensions, part 2 will focus on a few of the biggest and most influential theories so far.
Kaluza-Klein Theory
In 1919, Theodor Kaluza proposed a theory with four spatial dimensions as an attempt to combine gravity and electromagnetism. This theory was later built upon by Oskar Klein in 1926.
In Kaluza's attempt to combine these fundamental forces, he suggested a fourth, unseen spatial dimension. To build this system, he took Einstein's equations and extended them into five dimensions. He found that the five-dimensional version of Einstein's equations naturally contained the four-dimensional version within it. The extended metric had fifteen components: ten described our four-dimensional General Relativity, four of the remaining five described the electromagnetic force through Maxwell's equations, and the last component was a scalar field with no known use at the time.
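The component counting works because a symmetric 5×5 metric has fifteen independent entries. Schematically (conventions vary between authors, so this is only the standard bookkeeping):

\[
\hat{g}_{AB} \sim
\begin{pmatrix}
g_{\mu\nu} & A_\mu \\
A_\nu & \phi
\end{pmatrix},
\qquad
15 = \underbrace{10}_{g_{\mu\nu}} + \underbrace{4}_{A_\mu} + \underbrace{1}_{\phi},
\]

with g_μν the four-dimensional metric of General Relativity, A_μ playing the role of the electromagnetic potential, and φ the leftover scalar field.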
A key concept of Kaluza-Klein theory is that, rather than treating electric charge as simply a measured quantity, it is represented as motion along the fifth dimension. The attempt to create the simplest mathematical structure that could describe the five dimensions led to the assumption that no part of the five-dimensional Einstein equations depends explicitly on the fifth coordinate: the extra dimension is there to resolve other issues without disrupting the basic workings of Einstein's equations. To achieve this, Kaluza imposed the "cylinder condition," requiring all derivatives with respect to the fifth coordinate to vanish, effectively hiding the fifth dimension at a macroscopic level and preserving the four dimensions that we experience.
Oskar Klein produced a physical explanation for the cylinder condition in 1926. He suggested that the fifth dimension is compactified, curled up into an unobservably small circle, which explains why we cannot see it.
An interesting way to understand this is to think of a garden hose. From a distance, the hose looks like a one-dimensional line. Up close, however, it has two dimensions: its length and a small circular dimension wrapped around it.
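Klein's picture also explains, at least schematically, why the circle must be tiny. Momentum around a circle of radius R is quantized, so (in natural units) motion in the fifth dimension shows up to us as a tower of particles with masses roughly

\[
m_n \sim \frac{|n|}{R}, \qquad n = 0, \pm 1, \pm 2, \dots
\]

For a sufficiently small R these "Kaluza-Klein resonances" are far too heavy to be produced in any experiment, which is why the extra dimension stays hidden; searches for exactly such resonances are mentioned below.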
This theory revolutionized how physicists thought about spacetime. In a letter to Kaluza that same year, Einstein wrote,
“The idea of achieving unification by means of a five-dimensional cylinder world never dawned on me […]. At first glance, I like your idea enormously. The formal unity of your theory is startling.” (Einstein, 1919)
Over time, the original Kaluza-Klein theory has been ruled out due to several fundamental flaws. Scientists have searched for Kaluza-Klein resonances, particles that would have to exist if the theory were true, and have found none. In addition, the theory addresses only gravity and electromagnetism, excluding the strong and weak forces. When combined with quantum mechanics, it also predicts incorrect values for otherwise well-measured constants, showing massive discrepancies. Despite these issues, Kaluza-Klein theory is considered the first step into the exploration of extra dimensions and the precursor to many theories in the decades that followed. Its core idea, that hidden dimensions give rise to the forces we see in our four dimensions, has been crucial to further exploration of spacetime.
String Theory
String Theory is a very common term, but few people actually know what it means. String Theory proposes that instead of the universe being made up of zero-dimensional point particles, it is made up of tiny vibrating strings. The specific vibration of a string determines what it appears to be (a photon, a quark, etc.). The theory aims to unify all of these different particles and properties into one underlying object: the string.
When physicists first began to work on String Theory, they ran into mathematical problems such as negative probabilities. In four dimensions, strings do not have enough room to produce the wide range of vibrations needed to create all the particles of the Standard Model. Superstring Theory therefore places the strings in ten dimensions (nine of space and one of time). A major reason physicists embraced string theory was that it naturally predicted a particle, the "graviton," that would behave exactly like the carrier of the gravitational force. Theoretical physicist Edward Witten has commented on this by saying,
“Not only does [string theory] make it possible for gravity and quantum mechanics to work together, but it […] forces them upon you.” (Edward Witten, NOVA, PBS)
M-Theory
M-Theory is an extension of String Theory that adds one more spatial dimension. Prior to its creation, different groups of physicists had created five versions of String Theory.
However, a true “Theory of Everything” should be one theory, not five possibilities.
M-Theory was created as an attempt to unify these five types of String Theory. The key to its development was the discovery of mathematical transformations, known as dualities, that map one version of String Theory onto another, showing that they were not truly separate theories. M-Theory proposes that the different versions are just different approximations of the same underlying theory, unified by adding one more dimension. Its eleven-dimensional framework allowed the five string theories to be unified alongside the theory of supergravity.
M-Theory, like Kaluza-Klein Theory, proposes that the extra dimensions are curled up and compactified. In the related string theories, six of the hidden dimensions are typically curled into a specific geometric shape known as a Calabi-Yau manifold, which generates the physical effects we observe in our four dimensions from the dimensions we cannot see. Calabi-Yau manifolds are highly compact, complex shapes that are central to this picture because they allow intricate folding without affecting the overall curvature of our universe, a property called "Ricci-flatness." The "holes" in a Calabi-Yau manifold are thought to correspond to the number of families of particles in the Standard Model. This introduces a key idea: instead of the fundamental laws of physics just being rules, they are geometric properties of our universe.
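The "Ricci-flatness" mentioned above has a very compact statement: the Ricci curvature tensor of the internal manifold vanishes,

\[
R_{mn} = 0,
\]

which is what lets the hidden dimensions fold up intricately without adding curvature to the large dimensions we observe.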
The biggest challenge M-Theory faces is its lack of experimental evidence. Its predictions are not testable with current or foreseeable technology, because the relevant effects appear only at length scales and energies far beyond our reach. Without testable predictions, the theory remains unverified for the time being.
Despite this lack of proof, many physicists still see M-Theory as a prominent candidate in our search for a “Theory of Everything”. Its mathematical consistency and its ability to unify both gravitational and quantum effects lead to it being considered highly promising.
However, while the math behind M-Theory is highly developed, it is not yet complete. The theory is still a work in progress as research is being conducted to better understand its structure and significance.
However, critics believe that M-Theory is fundamentally flawed. Many point to the "Landscape" problem: the theory predicts an enormous number of possible universes, each with its own set of physical laws. Critics argue that this undermines M-Theory's reliability, and that a true "Theory of Everything" should single out our universe alone.
Overall, M-Theory has neither been proven nor disproven and remains a crucial area for future exploration.
While the concepts of space and time were fundamental to the Newtonian world, centuries of digging deeper into the mechanics of our universe have uncovered that it isn’t all as simple as it seems. From Einstein’s Special Relativity to theories of multi-dimensional time, the science behind space and time has evolved into a complex field.
Newtonian Absolutism
At the dawn of classical mechanics, Newton created the foundation upon which all of modern spacetime theory is built. Space and time were considered to be entirely unrelated and absolute concepts. There was no question in his mind that time moves forward and space exists around us. Space was considered a static body within which we exist, while time was described as flowing in only one direction at a steady rate. Imagine space as a box, where events are contained within, and time as a river whose current pulls us along.
Newton coined the terms 'absolute space' and 'absolute time' to distinguish these absolutes from the relative quantities we actually measure. For centuries, this picture remained largely unquestioned, and space and time were treated not as physical entities to be studied in their own right, but as the fixed backdrop against which we interpret the world around us.
Einstein’s Revolution:
Special Relativity
The first true challenge to the Newtonian perspective of space and time came in the form of Einstein's Special Relativity. He introduced one key revolutionary concept: measurements of space and time are relative, depending upon the observer's frame of reference.
The motivations for Einstein’s work arose from the desire to eliminate the contradiction between Maxwell’s equations and Newtonian Mechanics. A simple way to visualize this contradiction is by imagining the following scenario:
Two rockets in space are flying towards each other, each at a speed of 500 miles per hour, giving a relative speed of 1,000 miles per hour. Now, if you were to throw a rock from one ship to the other at 10 miles per hour, it would reach the other ship with a relative speed of 510 miles per hour. Substituting light for the rock changes everything, because the speed of light is constant: no matter how fast you travel toward a beam of light, it will always approach you at the same speed, 3 × 10⁸ m/s.
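Special Relativity replaces the simple addition used for the rock with a formula that never lets a relative speed exceed c (shown here in its standard form for motion along one line). For everyday speeds the correction is negligible, but for light it forces the same answer for every observer:

\[
u' = \frac{u + v}{1 + \dfrac{u v}{c^{2}}},
\qquad
u = c \;\Longrightarrow\; u' = \frac{c + v}{1 + \dfrac{v}{c}} = c.
\]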
Many attempts were made to explain away this behavior of light, but none held up. Rather than trying to disprove or explain it, Einstein decided to take the constant speed of light as a fundamental property of nature. He did not explain the speed of light; he used it to explain other things, and he was willing to give up the time-honored fundamentals of Newton's laws in its favor.
He began with the basic definition of speed as distance divided by time. If the speed of light must remain constant while the distance it travels, as measured from the moving rocket, shrinks, then the time measured must shrink as well to preserve the equality. Working this out mathematically, Einstein arrived at time dilation: objects in motion experience time more slowly than objects at rest. Continuing with similar reasoning for other quantities, such as momentum and energy, he found that mass would effectively increase with speed and lengths would contract. Einstein's true genius was his willingness to question his own assumptions and give up some of the most basic intuitions about the universe in favor of the constancy of the speed of light.
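The standard quantitative statement of time dilation, included here for reference, is that a clock moving at speed v ticks more slowly by the Lorentz factor:

\[
\Delta t = \frac{\Delta t_{0}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}},
\]

where Δt₀ is the time the moving clock itself records. At everyday speeds the effect is immeasurably small; at about 87% of the speed of light, time runs at roughly half its usual rate.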
General Relativity
Special Relativity, however, did not incorporate gravity. Before Einstein, physicists believed that gravity was an invisible force that dragged objects towards one another. Einstein's General Relativity suggested that this 'dragging' is not a force at all, but an effect of curved spacetime: massive objects bend the space around them, inadvertently bringing other objects closer together.
General Relativity defines spacetime as a four-dimensional entity that must obey a set of equations known as Einstein's equations. He used these equations to argue that gravity isn't a force but a name we give to the effects of curved spacetime on the distances between objects, and he established a precise relationship between the mass and energy of an object and the curvature of the spacetime around it.
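Those equations can be written compactly (in a standard form, with the cosmological constant omitted):

\[
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]

where the left-hand side encodes the curvature of spacetime and the right-hand side encodes the mass and energy present, which is the precise relationship described above.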
As Einstein himself summarized it:
“When forced to summarize the general theory of relativity in one sentence: Time and space and gravitation have no separate existence from matter.” -Einstein.
Einstein's General Relativity predicted many phenomena that were only observed years later. A famous example is gravitational lensing, in which the path of light curves as it passes a massive object. This effect was observed by Sir Arthur Eddington during the 1919 solar eclipse, yet Einstein had predicted it, with no observational proof, as early as 1912.
Closed-Timelike-Curves (CTCs)
Another major prediction arising from Einstein's General Relativity is the Closed-Timelike-Curve (CTC), which emerges from certain mathematical solutions to Einstein's equations. Some solutions, such as those describing extremely massive, spinning objects, create situations in which time could loop back on itself.
In physics, every object traces a specific trajectory through spacetime that indicates its position in space and time at every moment. Connecting these positions forms the story of the object's past, present, and future, called its worldline. An object sitting still has a worldline that points straight along the time direction, while a moving object's worldline also changes spatial position. Spacetime diagrams show each event with two light cones, one opening into the future and one into the past, with a spatial dimension on the other axis, as seen in Figure 1.
CTCs are created when the worldline of an object forms a loop, meaning the object at some point travels backwards in time to reconnect with its own starting point. Closed-Timelike-Curves are, in essence, exactly what they sound like: closed loops that are traversed in a timelike way. Traveling in a timelike way means that the object's progress through time always outpaces its movement through space, so it never has to exceed the speed of light. As seen in Figure 2, the worldline of a CTC is a loop, because some point in space and time connects the end back to the beginning.
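In the usual notation (one standard sign convention, included only to pin down the word 'timelike'), the spacetime interval between two nearby events classifies their separation:

\[
ds^{2} = -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2},
\qquad
ds^{2} < 0 \;\Rightarrow\; \text{timelike}.
\]

A timelike worldline is one along which every small step satisfies this condition, which is just the statement that the object always moves slower than light.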
Two major examples of famous CTC solutions are the Gödel Universe and the Tipler Cylinder:
Gödel Universe: Suggested by mathematician Kurt Gödel in 1949, the Gödel Universe is a rotating universe filled with swirling dust. The rotation is powerful enough to drag the surrounding spacetime along with it, and within that curved, rotating spacetime, paths exist that loop back on themselves, forming CTCs. This was the first solution suggesting that time travel might be a legitimate possibility rather than just a hypothetical scenario.
Tipler Cylinder: In the 1970s, physicist Frank Tipler suggested an infinitely long, massive cylinder spinning along the vertical axis at an extremely high speed. This spinning would twist the fabric of spacetime around the cylinder, creating a CTC.
Closed timelike curves bring many paradoxes with them, the most famous being the grandfather paradox. Suppose a woman travels back in time and kills her grandfather before her parents are born; then she would never exist. But if she never exists, there is no one to go back and kill her grandfather, so she must exist after all. Yet if she exists, her grandfather does not survive, and the loop has no consistent resolution.
Most importantly, CTCs drove further exploration and directed significant attention to the spacetime field for decades. Scientists who didn’t fully believe Einstein’s General Relativity pointed to CTCs as proof of why it couldn’t be true, leaving those who supported Einstein to search extensively for a way to explain them. This further exploration into the field has laid the foundation for many theories throughout the years.
The prevailing belief among scientists is that CTCs simply do not exist: while they are mathematically possible, the conditions required to create them appear physically unattainable. Many of the proposed setups require objects with negative energy density and other kinds of 'exotic matter' that have never been shown to exist. Furthermore, even if a CTC were to form, the region of spacetime around it would be highly unstable, so it could not sustain itself. The situations in which CTCs would arise involve fields of energy that grow toward infinity near the Cauchy horizon, the boundary beyond which causality and predictability break down, making these scenarios physically unviable.
Advancements in genetic engineering have brought revolutionary tools to the forefront of biotechnology, with CRISPR leading as one of the most precise and cost-effective methods of gene editing. CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats, allows scientists to alter DNA sequences by targeting specific sections of the genome. Originally discovered as part of a bacterial immune system, CRISPR systems have now been adapted to serve as programmable gene-editing platforms. This paper explores how CRISPR works, its current uses, its future potential, and the ethical considerations surrounding its application in both human and non-human systems.
How the CRISPR System Works
The CRISPR-Cas system operates by combining a specially designed RNA molecule with a CRISPR-associated protein, such as Cas9 or Cas12a. The RNA guides the protein to a specific sequence in the genome, where the protein then cuts the DNA. Once the strand is cut, natural repair mechanisms within the cell are activated. Researchers can either allow the cell to disable the gene or insert a new gene into the gap. As described by researchers at Stanford University,
“The system is remarkably versatile, allowing scientists to silence genes, replace defective segments, or even insert entirely new sequences.” (CRISPR Gene Editing and Beyond)
This mechanism has been compared to a pair of molecular scissors that can cut with precision. For example, the Cas9 protein is programmed with a guide RNA to recognize a DNA sequence of about 20 nucleotides. Once it finds the target, it makes a double-stranded cut. The repair process that follows enables gene knockouts, insertions, or corrections. This technology has dramatically reduced the time and cost associated with gene editing, making previously complex tasks achievable in weeks rather than months. According to a 2020 review,
“CRISPR/Cas9 offers researchers a user-friendly, relatively inexpensive, and highly efficient method for editing the genome.” (Computational Tools and Resources Supporting CRISPR-Cas Experiments)
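To make the “molecular scissors” idea more concrete, here is a minimal Python sketch of how one might scan a DNA sequence for a 20-nucleotide guide match followed by the NGG PAM that Cas9 requires. The DNA and guide sequences below are made-up examples, and the script illustrates only the matching logic, not a lab tool.

```python
# Minimal sketch: find where a Cas9 guide RNA could direct a cut in a DNA strand.
# The sequences below are made-up examples, not real genomic data.

def find_cas9_targets(dna: str, guide: str) -> list[int]:
    """Return indices where `guide` (20 nt) matches and is followed by an NGG PAM."""
    assert len(guide) == 20, "SpCas9 guides are typically 20 nucleotides long"
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i:i + 20]
        pam = dna[i + 20:i + 23]                      # the 3 bases just downstream
        if protospacer == guide and pam[1:] == "GG":  # PAM pattern: NGG
            hits.append(i)
    return hits

dna = "ATGCCGTTAGCTAGGCTTACGATCGATCGTGGTACGTTAGCC"   # hypothetical strand
guide = "GCTAGGCTTACGATCGATCG"                        # hypothetical 20-nt guide
for i in find_cas9_targets(dna, guide):
    # Cas9 cuts about 3 bases upstream of the PAM, between positions i+16 and i+17.
    print(f"Target at position {i}; expected cut site near position {i + 17}")
```

Running this reports a single target site; the cell’s own repair machinery then takes over at the cut, which is the step researchers exploit to disable or rewrite the gene.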
CRISPR’s influence extends across many fields, but its role in medicine has attracted the most attention. Scientists are using CRISPR to treat genetic diseases such as sickle cell anemia by editing patients’ own stem cells outside the body and then reinserting them. In 2023, researchers published results showing that a single treatment could permanently alleviate symptoms for some patients with these genetic diseases (Zhang 4). Another area of exploration includes its potential for treating cancers by modifying immune cells to better recognize and destroy cancerous tissue. According to Molecular Cancer,
“Gene editing technologies have successfully demonstrated the correction of mutations in hematopoietic stem cells, offering hope for long-term cures.” (Zhang 3)
Beyond human health, CRISPR has transformed agricultural practices. Scientists are using it to develop crops that resist pests, drought, or disease without the need for traditional genetic modification methods that insert foreign DNA. One example of a slower traditional technique is conjugation, in which genetic material is transferred between bacterial cells through direct contact; it is just one of many conventional genetic modification methods.
CRISPR has been used to produce tomatoes with longer shelf lives and rice varieties that can survive in low-water environments. According to the World Economic Forum,
“CRISPR can help build food security by making crops more resilient and nutritious.” (CRISPR Gene Editing for a Better World)
Such developments are increasingly critical in addressing global food demands and climate challenges.
Research is also underway to apply CRISPR in animal breeding and disease control. In mosquitoes, scientists are testing ways to spread genes that reduce malaria transmission. In livestock, researchers are working to produce animals that are more resistant to disease. These experiments, while promising, require cautious monitoring to ensure ecosystem stability and safety.
Future Potential
Looking ahead, new techniques are refining CRISPR’s capabilities. Base editing allows researchers to change a single letter of DNA without cutting both strands, reducing off-target effects. Prime editing, a newer method, uses an engineered protein to insert new genetic material without causing double-stranded breaks. These tools provide even more control. According to the Stanford report,
“Prime editing may become the preferred approach for correcting single-point mutations, which are responsible for many inherited diseases.” (CRISPR Gene Editing and Beyond)
Possible Concerns
Despite its potential, CRISPR also raises important ethical concerns. One of the most debated topics is germline editing, or the modification of genes in human embryos or reproductive cells. Changes made at this level can be passed down to future generations, leading to unknown consequences. In 2018, the birth of twin girls in China following germline editing sparked international outrage and led to widespread calls for stricter regulation. The scientific community responded swiftly, with many organizations calling for a global prohibition on clinical germline editing. As CRISPR & Ethics – Innovative Genomics Institute (IGI) states,
“Without clear guidelines, genome editing can rapidly veer into ethically gray areas, particularly in germline applications.”
Another concern is the potential for unintended consequences, known as off-target effects. These are accidental changes to parts of the genome that were not intended to be edited, which could lead to harmful mutations or unforeseen health problems. Researchers are actively developing tools to better predict and detect such errors, but long-term safety remains a topic of study. The possibility of using CRISPR for non-therapeutic purposes, such as enhancing physical or cognitive traits, raises further ethical questions.
Cost and accessibility are also significant factors. Although the CRISPR tools themselves are affordable for research institutions, the cost of CRISPR-based therapies remains high. According to Integrated DNA Technologies,
“Therapies based on CRISPR currently cost hundreds of thousands of dollars per patient, limiting their availability.” (CRISPR-Cas9: Pros and Cons)
Bridging this gap requires investments in infrastructure, policy development, and global partnerships to ensure that developing countries are not left behind.
In conclusion, CRISPR is reshaping the landscape of genetics and biotechnology. It has already brought major advances to medicine, agriculture, and environmental science. While the technology is still evolving, its precision offers a glimpse into the future of human health. CRISPR has the potential to unlock solutions to some of humanity’s most pressing challenges.
Lino, Cathryn A., et al. “Delivering CRISPR: A Review of Methods and Applications.” Drug Delivery and Translational Research, vol. 8, no. 1, 2020, pp. 1–14. PubMed Central, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7427626/. Accessed 31 July 2025.
Imagine a world where every surface—the walls, the roof of your car—harnesses the sun to power your surroundings. Not with stiff, bulky solar panels, but with something as simple and inconspicuous as paint.
Thanks to new and evolving technology, this vision inches closer and closer to reality. Perovskite-based photovoltaic paint is a developing technology with the potential to turn any paintable surface into a solar panel.
What are Perovskites?
Perovskites are a class of crystalline materials with the structural formula ABX₃, meaning they contain a large cation (A), a smaller cation (B), and an anion (X, often a halide) in a 1:1:3 ratio. Their unique structure makes them incredibly efficient at converting sunlight into electricity, with recent developments reaching over 25% efficiency (more than 25% of the incoming solar energy converted into electricity), while traditional solar panels usually have 15-25% efficiency.
The Parts of Perovskite Solar Paint:
Perovskite-based solar paint must be applied in multiple layers. The six main layers, in order, are: the transparent conductive layer (front/top electrode), electron transport layer, perovskite absorber layer, hole transport layer, back electrode, and substrate.
The transparent conductive layer functions as the front electrode. It must be transparent, to allow sunlight to pass through, and conductive, to carry the extracted electrons.
Next is the electron transport layer, which extracts and transports electrons from the perovskite layer to the electrode and prevents holes from moving in the wrong direction.
The perovskite absorber layer is located at the center and is made of a perovskite compound that absorbs sunlight to create electron-hole pairs (excitons). It acts as the photoactive layer where sunlight is converted into electricity.
The hole transport layer lies below, which extracts and transports holes (the positive charges) to the back electrode and blocks electrons from going backward, aiding in charge separation.
The back electrode then collects the holes and completes the electrical circuit, allowing current to flow through an external device.
Finally, the substrate is the surface being painted (can be glass, plastic, metal, etc.) and provides structural support.
How Perovskite Solar Paint Works:
Sunlight first hits a perovskite layer, and the perovskite material absorbs photons. This excites electrons from the valence band to the conduction band, creating electron-hole pairs (excitons). In perovskites, excitons require little energy to separate into electrons and holes, which improves efficiency. Electrons are pushed toward the electron transport layer and holes toward the hole transport layer. The front and back electrodes collect the charges, and because oppositely-charged electrons and holes are separated and collected on different sides, a voltage builds up between the two electrodes. When the painted solar surface is connected to a circuit, the voltage drives electrons through the wire, powering a device or charging a battery.
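To give a rough sense of scale, the electrical output of a painted surface is simply the incoming solar power multiplied by the conversion efficiency. In the sketch below, the wall area and full-sun irradiance are illustrative assumptions, and the efficiency values echo the figures mentioned earlier rather than measurements of any real product.

```python
# Back-of-the-envelope estimate of output from a painted solar surface.
# The area and sunlight values are illustrative assumptions, not measurements.

def solar_paint_output_watts(area_m2: float, efficiency: float,
                             irradiance_w_per_m2: float = 1000.0) -> float:
    """Electrical power = incoming solar power x conversion efficiency."""
    return area_m2 * irradiance_w_per_m2 * efficiency

wall_area = 10.0   # square meters of painted wall (assumed)
perovskite = solar_paint_output_watts(wall_area, 0.25)  # ~25% lab-cell efficiency
silicon    = solar_paint_output_watts(wall_area, 0.18)  # mid-range panel efficiency

print(f"Perovskite paint: ~{perovskite:.0f} W under full sun")
print(f"Silicon panel of the same area: ~{silicon:.0f} W under full sun")
```

Under these assumptions, a 10 m² painted wall in full sun would supply on the order of a few kilowatts at peak, which is why even modest efficiency gains matter so much for paintable surfaces.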
A Game-Changer for Clean Energy
Perovskite-based photovoltaic paint could radically transform the solar energy industry. Unlike traditional silicon, which requires high temperatures and vacuum conditions for production, perovskite materials can be produced cheaply while remaining efficient. Perovskite paint can also be applied to a wide variety of surfaces, allowing homeowners to harness solar power in places where conventional panels are impractical.
The Challenges to Implementation
As promising as perovskite solar paint is, several significant challenges stand in the way of widespread implementation. Current perovskite materials are highly sensitive to moisture, heat, and UV light, meaning they degrade quickly outdoors. While silicon panels can last 25 years or more, early perovskite prototypes can lose efficiency after months or just weeks. Researchers are working on protective coatings and new formulations to address this, but achieving long-term durability remains a hurdle. Most high-efficiency perovskite formulas also contain lead or other toxic heavy metals, raising concerns about environmental contamination and safe handling.
Efforts to develop lead-free perovskites are ongoing (tin being a promising alternative), though they currently offer lower efficiency and a shorter lifespan. While perovskite solar paint and panels work well in laboratory settings, scaling up to commercial production is complex. A uniform coating that ensures proper perovskite crystallization must be applied over large areas, and surfaces must be treated to ensure adhesion and conductivity. In addition, regulatory bodies are still developing safety and performance standards for perovskite technologies. Gray areas remain about how these materials will be certified/recycled at the end of their lifespan.
Global Progress and Investment
In the U.S., the Department of Energy recently allocated over $40 million to perovskite R&D, focusing on improving durability and scaling up production methods. Startups like SolarPaint, Oxford PV, and Saule Technologies compete to bring the first market-ready products to consumers, while well-known companies like Mercedes-Benz seek to implement solar paint in their newest vehicles.
Conclusion
Perovskite-based photovoltaic paint is still in the early stages, but it represents one of the most exciting frontiers in renewable energy. If challenges like stability and toxicity can be solved, any painted surface could soon become a power source. Keep an eye on your walls—they might power the world someday.
Glossary
Valence Band:
The highest range of electron energies where electrons are normally present at low energy (ground state)
Valence electrons reside in the valence shell of atoms
In any given material, atoms are packed closely together so their valence shells overlap and form the valence band
Electrons here are bound to their atoms and don’t move freely.
Band Gap:
The energy gap between the valence band and conduction band.
Electrons must absorb enough energy (like from sunlight) to jump across this gap.
The larger the gap, the more energy it takes to jump across, and the less conductive a material is
Semiconductors like perovskites have a small gap (1-2 electron volts) and can conduct electricity if energy is added (e.g., from sunlight)
Conduction Band:
The higher energy band where electrons are free to move through the material.
Electrons in this band can carry electricity.
Electron-hole pairs:
When a photon (light) hits the perovskite, it transfers energy to an electron, exciting it from the valence band to the conduction band.
The excited electron in the conduction band moves freely and can conduct electricity.
The “hole” is the spot the electron left behind—a positive charge in the valence band.
There is now an electron-hole pair
Exciton:
An exciton is the state where an electron and a hole are bound together, still attracted to each other by opposing charges
Formed right after light absorption, before the electron fully separates from the hole/jumps to the conduction band.
Neutral overall, so they don’t conduct electricity until they break apart.
Common in some perovskites
Front and Back Electrode:
They collect and transport electrical charges (electrons and holes) generated by sunlight.
They’re like the “wires” of the solar paint that let electricity flow out into a usable circuit.
Front electrode: Lets light in and collects electrons or holes (depending on design, usually electrons)
Back electrode: Collects the opposite of what the front electrode does (usually holes) and helps drive current through an external circuit
Electron transport layer:
Extracts and transports electrons to the correct electrode
Hole transport layer:
Extracts and transports holes to the correct electrode
The transport layers guide the charges (electrons (−) and holes (+)) to the correct electrodes, helping to prevent recombination (when electrons and holes meet and cancel each other out).
Voltage:
Voltage is defined as the electric potential difference between two points.
It tells you how much “push” electrons are getting.
Measured in volts (V)
Voltage is like water pressure in a pipe. The higher the pressure, the more push the water (electrons) is getting
Current:
Definition: Current is the rate at which electric charge flows past a point.
Measured in amperes (A), or amps
More current = more electrons moving through the wire per second
Current is like the amount of water flowing through the pipe. The wider or faster the flow, the higher the current.
Power:
Definition: Power is the rate at which electrical energy is used or produced
Measured in watts
Formula: Power (P) = Voltage (V) × Current (I)
Power is like how much water pressure × amount of water is turning a waterwheel—how much work is being done.
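As a quick worked example with made-up numbers: a painted section producing 0.6 V while driving a current of 2 A delivers P = 0.6 V × 2 A = 1.2 W of power.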
Alanazi, T. I. (2023). Current spray-coating approaches to manufacture perovskite solar cells. Results in Physics, 44, 106144. https://doi.org/10.1016/j.rinp.2022.106144
Bishop, J. E., Smith, J. A., & Lidzey, D. G. (2020). Development of Spray-Coated Perovskite Solar Cells. ACS Applied Materials & Interfaces, 12(43), 48237–48245. https://doi.org/10.1021/acsami.0c14540
Chowdhury, T. A., Bin Zafar, Md. A., Sajjad-Ul Islam, Md., Shahinuzzaman, M., Islam, M. A., & Khandaker, M. U. (2023). Stability of perovskite solar cells: issues and prospects. RSC Advances, 13(3), 1787–1810. https://doi.org/10.1039/d2ra05903g
Khatoon, S., Kumar Yadav, S., Chakravorty, V., Singh, J., Bahadur Singh, R., Hasnain, M. S., & Hasnain, S. M. M. (2023). Perovskite solar cell’s efficiency, stability and scalability: A review. Materials Science for Energy Technologies, 6, 437–459. https://doi.org/10.1016/j.mset.2023.04.007
“Some experts in the field predict that the first quantum computer capable of breaking current encryption methods could be developed within the next decade. Encryption is used to prevent unauthorized access to sensitive data, from government communications to online transactions, and if encryption can be defeated, the privacy and security of individuals, organizations, and entire nations would be under threat.” – The HIPAA Journal
Introduction
The cybersecurity landscape is facing a drastic shift as the increasing power of quantum computers threatens modern encryption. Experts predict a quantum D-day (Q-day) in the next 5-10 years, when quantum computers will be powerful enough to break through even the strongest of today’s cybersecurity mechanisms. Meanwhile, only a handful of companies have begun to prepare against the threat by developing quantum-resistant cybersecurity methods. To fully combat the threat, we need to act now.
Encryption Today
Modern cryptography is dominated by two major algorithms that transform ordinary text into ciphertext:
1. Rivest-Shamir-Adleman (RSA)
Dating back to 1977, the RSA algorithm relies on the difficulty of factoring large numbers. RSA can be separated into two parts, a public and a private key. The public key, used for encoding, is a pair of numbers (n, e), where n is the product of two large prime numbers (p·q = n). The value of e can be any number that is coprime to (p-1)(q-1), meaning that the greatest common factor of e and (p-1)(q-1) is 1. The private key d, used for decoding, is the modular inverse of e: the number that satisfies d·e ≡ 1 (mod (p-1)(q-1)).
For decades, RSA has provided security for digital data because the sheer size of n, combined with the variability of e, makes it practically impossible to recover (p, q) from the public key (n, e). However, quantum computing brings the ability to quickly factor large numbers, allowing (p, q) to be determined from the public key alone.
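To see how these pieces fit together, here is a toy Python sketch of RSA with deliberately tiny primes; the specific numbers are illustrative only, since real keys use primes hundreds of digits long.

```python
from math import gcd

# Toy RSA with deliberately tiny primes (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                     # 3233, part of the public key
phi = (p - 1) * (q - 1)       # 3120

e = 17                        # public exponent, chosen coprime to phi
assert gcd(e, phi) == 1

d = pow(e, -1, phi)           # private exponent: d*e = 1 (mod phi) -> 2753

message = 65                  # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d

print(n, e, d, ciphertext, recovered)   # recovered == 65
```

Factoring n = 3233 back into 61 × 53 is trivial at this size, but for a 2048-bit n the same task is far beyond any classical computer, and that gap is exactly what Shor’s algorithm threatens to close.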
2. Elliptic Curve Cryptography (ECC):
Since 1985, ECC algorithms have been favored over RSA due to their greater complexity and faster encryption, with ECC proving to be up to ten times faster. ECC algorithms use an elliptic curve of the form y² = x³ + ax + b defined over a finite field Fp rather than the real numbers. The field Fp consists of the integers from 0 to p-1, where p is prime, with all arithmetic done modulo p.
Figure 1: The elliptic curve
Figure 2: The elliptic curve over F11
For the purpose of illustration, let us take the elliptic equation y² = x³ + 13 and the field F₁₁. Figure 1 shows the elliptic curve, while Figure 2 shows the solutions to y² = x³ + 13 (mod 11). The order of the curve is the number of points, including the point at infinity, that satisfy the equation over a specific field (12 points in Figure 2). The private key is some value k between 1 and the order of the curve. The public key is calculated by taking one of the points, called the generator point (G), and multiplying it by k (giving kG). The system then encrypts information using the public key kG, and the result can only be decrypted by those who know k.
For example, let us take a value of k=5 and the point (9,4) as the generator point (G). When we multiply 5G, we are given the point (9,7), which would be the public key. However, just given the 2 points, it is extremely difficult to find the value of k.
ECC algorithms have long been considered nearly unbreakable due to the elliptic curve discrete logarithm problem, or the ‘ECDLP’. The ECDLP is a mathematical problem that asks: given two points (P, Q) on an elliptic curve, what operation or algorithm could be used to find the specific constant k such that k multiplied by P equals Q?
The key issue lies in point multiplication. To double a point P, a tangent line is drawn to the curve at P; wherever that line intersects the curve again gives a new point, and reflecting that point across the x-axis (negating its y-coordinate) yields 2P. Repeating this process of doubling and adding eventually reaches kP. While it is straightforward to find Q given P and k, it is practically impossible to find k given P and Q, because there is currently no known efficient inverse operation that undoes point multiplication.
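Here is a minimal Python sketch of that point arithmetic on the toy curve above (y² = x³ + 13 over F₁₁). It reproduces the k = 5, G = (9, 4) example from earlier and is for illustration only, since real ECC uses curves over prime fields hundreds of bits wide.

```python
# Toy elliptic-curve arithmetic over F_11 for the curve y^2 = x^3 + 13 (a = 0).
# Real ECC uses the same operations over prime fields hundreds of bits wide.
P_MOD, A = 11, 0
INF = None   # the point at infinity (identity element)

def add(pt1, pt2):
    if pt1 is INF: return pt2
    if pt2 is INF: return pt1
    (x1, y1), (x2, y2) = pt1, pt2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                           # P + (-P) = infinity
    if pt1 == pt2:
        slope = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)   # tangent line (doubling)
    else:
        slope = (y2 - y1) * pow(x2 - x1, -1, P_MOD)          # secant line (addition)
    x3 = (slope * slope - x1 - x2) % P_MOD
    y3 = (slope * (x1 - x3) - y1) % P_MOD                    # reflect across the x-axis
    return (x3, y3)

def multiply(k, pt):
    result = INF
    for _ in range(k):            # repeated addition; real code uses double-and-add
        result = add(result, pt)
    return result

G = (9, 4)                        # generator point from the example above
print(multiply(5, G))             # -> (9, 7), the public key for k = 5
```

Recovering k from G and kG by brute force is easy on this 12-point curve, but on a standard-sized curve that same search is the ECDLP that keeps ECC secure.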
Ultimately, RSA and ECC algorithms encrypt nearly all digital data and communication. They keep everything secure, from classified government data to something as simple as a text message. Encryption allows private information to remain private and large national or international systems to continue functioning. It acts as a barrier against bad actors looking to hack or exploit this private data. Without encryption, there would be no safeguard for any data. Imagine if everything you ever put on a device, whether private photos or bank information, suddenly became public. You could no longer trust digital privacy and safety if these algorithms were to fail.
To understand the momentous advancements in quantum computing, it is important to take a step back and examine the field’s origins as well as how quantum mechanics have evolved over time. Written in 1900 by Max Planck, the ‘Quantum Hypothesis’ explored the idea that rather than the conventionally accepted continuously flowing energy, energy was actually emitted in non-connected packets called quanta. His work laid the foundation for an exploration into what has become the field of quantum mechanics. Both Einstein’s 1905 work on the Photoelectric effect and Niels Bohr’s 1913 work on the atom further supported this claim by suggesting quantum leaps and the particle-like behaviors of a photon.
In 1927, Heisenberg formulated his uncertainty principle, which states that it is impossible to simultaneously know the position and the speed of a particle with perfect accuracy. In 1935, Einstein, Podolsky, and Rosen jointly published a paper questioning quantum mechanics via entanglement, the apparent influence of the state of one particle on the state of another, instantaneously and over great distances. Recent work has shown that entanglement can connect particles even between a satellite and the Earth. In 1964, John Bell derived his famous inequalities, and later experiments searching for violations of those inequalities confirmed that entanglement is real.
In 1926, Schrödinger created a wave equation that accurately predicted the energy levels of electrons in atoms. Von Neumann built on this, alongside Hilbert’s work, to create the mathematical framework for quantum mechanics, formalizing quantum states and creating a method to understand the behavior of quantum systems. In the 1940s, Feynman, Schwinger, and Tomonaga developed the theory of Quantum Electrodynamics (QED), which describes the interactions of light and matter.
The 1980 conference of physicists, mathematicians, and computer scientists was the turning point from quantum theory to quantum applications, laying the foundation for all of quantum computing. While the first working laser was built in 1960, quantum ideas were not pushed toward computation until Paul Benioff’s 1980 description of a quantum computer, the first step towards quantum computing.
Quantum Computing: What is it and how does it work?
Superposition: The state of being in multiple states or places at once. Superposition is most commonly seen with overlapping waves, but at the quantum level it can be understood as a particle being in both state 1 and state 0 at the same time. However, when measured, the particle must settle into either state 1 or state 0. The best-known analogy is Schrödinger’s cat: if you were to put a cat inside a box with a substance that has an equal chance of killing or not killing the cat within an hour, then after one hour you could say that the cat is both dead and alive until you look, at which point it must be either dead or alive.
Entanglement: A phenomenon by which two particles become connected such that the fate of one affects the other, irrespective of the distance between them. Prior to any measurement, the two particles are in a state of superposition, meaning each can be in both state 0 and state 1 at the same time. However, when one is measured, its state directly determines the state of the other. This principle was confirmed through experimental violations of the Bell inequalities.
Quantum computing allows storage of more information and more efficient processing, creating opportunities to dramatically increase the rate at which many modern machines work. While the technology faces setbacks in these developing stages, it makes it possible to explore many possibilities simultaneously, rather than being limited to the straightforward, one-at-a-time operations of most modern machines (whose further miniaturization is itself constrained by quantum tunneling effects).
Quantum systems use qubits as the fundamental unit of information instead of the traditional bit. Qubits allow for the superposition of ones and zeros, making it possible for a quantum computer with relatively few qubits to represent an enormous number of states at once; for certain problems, this translates into speedups of many orders of magnitude over the best classical computers on the market today. In addition, the entanglement of multiple qubits means that information capacity grows exponentially, rather than linearly, with the number of qubits.
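As a rough illustration of superposition in code (a classical simulation of a single qubit using the standard textbook representation, not a real quantum computation), a qubit can be written as two complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
import numpy as np

# A qubit is a 2-component vector of complex amplitudes; |amplitude|^2 gives
# the probability of measuring that state. This is a classical simulation.
zero = np.array([1, 0], dtype=complex)                 # the state |0>

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate

superposed = H @ zero                                  # equal mix of |0> and |1>
probabilities = np.abs(superposed) ** 2                # -> [0.5, 0.5]

# "Measuring" forces the qubit to settle into 0 or 1 with those probabilities.
outcome = np.random.choice([0, 1], p=probabilities)
print(probabilities, outcome)
```

Simulating n qubits this way requires tracking 2ⁿ amplitudes, which is exactly the exponential growth described above and the reason classical machines cannot keep up as qubit counts rise.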
Compare and Contrast: Quantum Computers vs. Traditional Computers
The Quantum Threat to Cryptography
While current computers may not be strong enough to carry out an attack on cryptography, the emerging field of quantum computing poses a risk to all of modern encryption.
Is the threat just theoretical?
Even as an emerging technology, quantum computing poses a very real threat to cryptography. While many people would be more than willing to write it off as a threat of the distant future, that future may be closer than you believe. Quantum computing has already demonstrated its strength through algorithms that could eventually compromise sensitive data.
The most prominent algorithm with regard to cryptography is Shor’s ‘Factoring Algorithm’ from 1994. Specifically, Shor’s Factoring Algorithm (SFA) is a major threat to RSA cryptography systems. As I mentioned earlier, RSA systems rely on large numbers created as the product of two primes, basing their security on the inability to efficiently factor those numbers.
According to Thorsten Kleinjung of the University of Bonn, it would take around two years to factor N = 135066410865995223349603216278805969938881475605667027524485143851526510604859533833940287150571909441798207282164471551373680419703964191743046496589274256239341020864383202110372958725762358509643110564073501508187510676594629205563685529475213500852879416377328533906109750544334999811150056977236890927563 with under 2 GB of memory.
Shor’s Algorithm could exponentially speed this up by working as follows:
Start with the large number (N) and a guess (g). If g is a factor of N or shares a factor with N then we have already found the factors.
If g shares no factors with N, then we use the property that for any two coprime numbers a and b, there exists some power n and some multiple m such that a^n = mb + 1. Applying this here we get g^n = mN + 1, which we can rewrite as (g^(n/2) − 1)(g^(n/2) + 1) = mN. We can now change our objective from searching for values of g to searching for the value of n.
This is where quantum computing makes a vital difference. By preparing a superposition over many possible exponents x, the quantum system computes g^x (mod N) for all of them at once. We then take advantage of the fact that if g^x mod N = r, then g^(x+p) mod N = r, where p is the period of the function (the smallest power with g^p ≡ 1 mod N). By checking which values of x produce the same remainder, we find the period, since those x values are spaced exactly p apart.
From the period we can derive a frequency (f = 1/p).
Here we can apply a Quantum Fourier Transform (similar to a classical Fourier transform): when all the constructive and destructive interference in the superposition is accounted for, 1/p emerges as the dominant frequency.
Now that we have a candidate for p, we use it to compute g^(p/2) ± 1; the greatest common divisors of these numbers with N give our best guesses for the factors, and we iterate as necessary to correct for quantum error.
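To see why the period matters, here is a classical brute-force sketch of the same arithmetic for the tiny case N = 15 with the guess g = 7 (values chosen only for illustration). A quantum computer replaces just the period-finding loop, using superposition and the QFT to find p exponentially faster.

```python
from math import gcd

# Classical illustration of the arithmetic behind Shor's algorithm for N = 15.
# A quantum computer replaces only the period-finding loop; the gcd steps stay classical.
N, g = 15, 7
assert gcd(g, N) == 1      # otherwise gcd(g, N) is already a factor

# Brute-force period finding: smallest p with g^p = 1 (mod N).
p = 1
while pow(g, p, N) != 1:
    p += 1                 # for g = 7, N = 15 this gives p = 4

# With an even period, g^(p/2) - 1 and g^(p/2) + 1 share factors with N.
half = pow(g, p // 2, N)   # 7^2 mod 15 = 4
factors = gcd(half - 1, N), gcd(half + 1, N)
print(p, factors)          # 4, (3, 5)
```

The brute-force loop is what becomes hopeless for numbers hundreds of digits long; the quantum period-finding step is the only part of the recipe that needs a quantum computer.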
Aside from algorithms, many corporations have made recent advancements towards building quantum computers. As recently as June 2025, Nord Quantique, a Canadian startup, announced a breakthrough ‘bosonic qubit’ with built-in error correction. This creates the potential to produce successful, encryption-breaking machines with around 1,000 qubits by 2031, far fewer than the previously estimated one million qubits.
The ‘Harvest Now, Decrypt Later’ Tactic
Another major reason why quantum computing threatens cryptography is the ‘harvest now, decrypt later’ (HNDL) tactic. As the predicted Q-day nears (around 2035), bad actors have begun to collect and store encrypted data, with the goal of decrypting it in the future with sufficiently powerful quantum machines. The attackers may not be able to decrypt the data today, but they can intercept communications and stockpile the encrypted data until they can.
While it is easy to dismiss these attacks as something that could only matter at the nation-state level, that assumption only feeds a false sense of security. Stolen corporate information could enable bad actors to cause economic chaos and large-scale disruption. In fact, experts believe these attacks have become increasingly focused on businesses, since businesses hold people’s data and the power to create mass economic instability.
Matthew Scholl, Chief of the Computer Security Division at NIST, described the threat by saying,
“Imagine I send you a message that’s top secret, and I’ve encrypted it using this type of encryption, and that message is going to need to stay top secret for the next 20 years. We’re betting that an adversary a) hasn’t captured that message somehow as we sent it over the internet, b) hasn’t stored that message, and c) between today and 20 years from now will not have developed a quantum machine that could break it. This is what’s called the store-and-break threat.”
The most concerning aspect of HNDL attacks is that it is nearly impossible to know that your data has been harvested until quantum computers make decryption possible. By then, the damage will be irreversible. While not all data will still be valuable a decade or more from now, attackers are targeting specific data that they believe will hold long-term value.
Over the past 10 years, incidents have arisen that resemble HNDL attacks:
In 2016, Canadian internet traffic to South Korea was rerouted through China
In 2020, data from many large online platforms was rerouted through Russia
A study by HP’s Wolf Security discovered that one third of the cyber attacks conducted by nation-states between 2017 and 2020 were aimed at businesses
Post-Quantum Cryptography (PQC)
However, companies and nations have already begun to look into ways to protect data from quantum attacks. Post-quantum encryption algorithms focus on encrypting data in ways that are as difficult for quantum machines to break as they are for classical computers.
The Deputy Secretary of US Commerce, Don Graves said,
“The advancement of quantum computing plays an essential role in reaffirming America’s status as a global technological powerhouse and driving the future of our economic security. Commerce bureaus are doing their part to ensure U.S. competitiveness in quantum, including the National Institute of Standards and Technology, which is at the forefront of this whole-of-government effort. NIST is providing invaluable expertise to develop innovative solutions to our quantum challenges, including security measures like post-quantum cryptography that organizations can start to implement to secure our post-quantum future. As this decade-long endeavor continues, we look forward to continuing Commerce’s legacy of leadership in this vital space.”
One example of a potentially powerful PQC algorithm is CRYSTALS-Kyber, which NIST selected for general encryption in 2022. NIST added HQC to its list of PQC algorithms in 2024, giving a total of five algorithms that have met the standard.
NIST has published its standards for PQC and urges organizations to start incorporating them now, because the full shift to PQC may take as long as building those quantum computers will. Its key goals are not only to find algorithms that resist quantum attacks, but also to diversify the types of mathematics involved, mitigating the risk of compromised data. NIST also looks for algorithms that can be implemented and upgraded easily, so that systems maintain ‘crypto-agility’.
Many companies support PQCs and believe that they will safeguard the future of cryptography. Whitfield Diffie, cryptography expert, explains that
“One of the main reasons for delayed implementation is uncertainty about what exactly needs to be implemented. Now that NIST has announced the exact standards, organizations are motivated to move forward with confidence.”
Companies such as Google, Microsoft, IBM, and AWS are actively working to develop better resistance to quantum threats, helping to build some of the most powerful PQC algorithms. IBM is currently advocating for a Cryptography Bill of Materials (CBOM), a new standard to keep tabs on cryptographic assets and introduce more oversight into the system. Microsoft has become one of the founding members of the PQC Coalition, a group whose mission is to step forward and provide valuable outreach alongside education to support the shift towards PQC as the primary form of encryption.
While PQCs could be a valuable resource against quantum threats, there are still setbacks that make people question the validity of the whole effort. The Supersingular Isogeny Key Exchange (SIKE) algorithm, one of the NIST finalists for the PQC standard, failed due to a successful attack by a classical computer, rendering many of the fundamental mathematical assumptions false. In addition, many of these algorithms suffer due to a lack of extensive testing and uncertainty regarding how much quantum machines will actually be able to accomplish.
Conclusion
While the timeline of PQC development might be uncertain, it is imperative that we act now. Quantum computing is no longer a threat looming in some distant future, but an approaching reality with significant impacts. We need to begin shifting towards these safer systems as a community; we cannot wait until the threat has arrived to prepare.
Rob Joyce, Director of Cybersecurity at the National Security Agency, has stated that,
“The transition to a secured quantum computing era is a long-term intensive community effort that will require extensive collaboration between government and industry. The key is to be on this journey today and not wait until the last minute.”
Above all, it is crucial to recognize the threat and take action. Educating the people is the first step towards group action. Let awareness be our first line of defense.