America’s CDC is combining viral and antibody tests, distorting key metrics of the pandemic.

‘How Could the CDC Make That Mistake?’

The Centers for Disease Control and Prevention is conflating the results of two different types of coronavirus tests, distorting several important metrics and providing the country with an inaccurate picture of the state of the pandemic. We’ve learned that the CDC is making, at best, a debilitating mistake: combining test results that diagnose current coronavirus infections with test results that measure whether someone has ever had the virus. The upshot is that the government’s disease-fighting agency is overstating the country’s ability to test people who are sick with COVID-19. The agency confirmed to The Atlantic on Wednesday that it is mixing the results of viral and antibody tests, even though the two tests reveal different information and are used for different reasons.

This is not merely a technical error. States have set quantitative guidelines for reopening their economies based on these flawed data points.

Several states—including Pennsylvania, the site of one of the country’s largest outbreaks, as well as Texas, Georgia, and Vermont—are blending the data in the same way. Virginia likewise mixed viral and antibody test results until last week, but it reversed course and the governor apologized for the practice after it was covered by the Richmond Times-Dispatch and The Atlantic. Maine similarly separated its data on Wednesday; Vermont authorities claimed they didn’t even know they were doing this.  

The widespread use of the practice means that it remains difficult to know exactly how much the country’s ability to test people who are actively sick with COVID-19 has improved.

“You’ve got to be kidding me,” Ashish Jha, the K. T. Li Professor of Global Health at Harvard and the director of the Harvard Global Health Institute, told us when we described what the CDC was doing. “How could the CDC make that mistake? This is a mess.”

Viral tests, taken by nose swab or saliva sample, look for direct evidence of a coronavirus infection. They are considered the gold standard for diagnosing someone with COVID-19, the disease caused by the virus: State governments consider a positive viral test to be the only way to confirm a case of COVID-19. Antibody tests, by contrast, use blood samples to look for biological signals that a person has been exposed to the virus in the past.

A negative test result means something different for each test. If somebody tests negative on a viral test, a doctor can be relatively confident that they are not sick right now; if somebody tests negative on an antibody test, they have probably never been infected with or exposed to the coronavirus. (Or they may have been given a false result—antibody tests are notoriously less accurate on an individual level than viral tests.) The problem is that the CDC is clumping negative results from both tests together in its public reporting.

Mixing the two tests makes it much harder to understand the meaning of positive tests, and it clouds important information about the U.S. response to the pandemic, Jha said. “The viral testing is to understand how many people are getting infected, while antibody testing is like looking in the rearview mirror. The two tests are totally different signals,” he told us. By combining the two types of results, the CDC has made them both “uninterpretable,” he said.

The public-radio station WLRN, in Miami, first reported that the CDC was mixing viral and antibody test results. Pennsylvania’s and Maine’s decisions to mix the two tests have not been previously reported.

Kristen Nordlund, a spokesperson for the CDC, told us that the inclusion of antibody data in Florida is one reason the CDC has reported hundreds of thousands more tests in Florida than the state government has. The agency hopes to separate the viral and antibody test results in the next few weeks, she said in an email.

But until the agency does so, its results will be suspect and difficult to interpret, says William Hanage, an epidemiology professor at Harvard. In addition to misleading the public about the state of affairs, the intermingling “makes the lives of actual epidemiologists tremendously more difficult.”

“Combining a test that is designed to detect current infection with a test that detects infection at some point in the past is just really confusing and muddies the water,” Hanage told us.

The CDC stopped publishing anything resembling a complete database of daily test results on February 29. When it resumed publishing test data last week, a page of its website explaining its new COVID Data Tracker said that only viral tests were included in its figures. “These data represent only viral tests. Antibody tests are not currently captured in these data,” the page said as recently as May 18.

Yesterday, that language was changed. All reference to disaggregating the two different types of tests disappeared. “These data are compiled from a number of sources,” the new version read. The text strongly implied that both types of tests were included in the count, but did not explicitly say so.

The CDC’s data have also become more favorable over the past several days. On Monday, a page on the agency’s website reported that 10.2 million viral tests had been conducted nationwide since the pandemic began, with 15 percent of them—or about 1.5 million—coming back positive. But yesterday, after the CDC changed its terms, it said on the same page that 10.8 million tests of any type had been conducted nationwide. Yet its positive rate had dropped by a percentage point. On the same day it expanded its terms, the CDC added 630,205 new tests, but it added only 52,429 positive results.

This is what concerns Jha. Because antibody tests are meant to be used on the general population, not just symptomatic people, they will, in most cases, have a lower percent-positive rate than viral tests. So blending viral and antibody tests “will drive down your positive rate in a very dramatic way,” he said.
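
The arithmetic behind that dilution is easy to see. The sketch below is purely illustrative: the viral figures echo the CDC’s May 18 page, but the antibody batch is invented to show the effect.

```python
# Hypothetical illustration of how blending antibody results dilutes a
# viral percent-positive rate. Viral figures approximate the CDC's
# May 18 page; the antibody numbers are invented for illustration.
viral_tests, viral_pos = 10_200_000, 1_530_000    # ~15% positive
antibody_tests, antibody_pos = 600_000, 30_000    # assumed ~5% positive

viral_rate = viral_pos / viral_tests
blended_rate = (viral_pos + antibody_pos) / (viral_tests + antibody_tests)

print(f"viral-only positive rate: {viral_rate:.1%}")    # 15.0%
print(f"blended positive rate:    {blended_rate:.1%}")  # ~14.4%
```

The blended rate falls even though nothing about the outbreak, or the country’s ability to find sick people, has changed.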

The absence of clear national guidelines has led to widespread confusion about how testing data should be reported. Pennsylvania reports negative viral and antibody tests in the same metric, a state spokesperson confirmed to us on Wednesday. The state has one of the country’s worst outbreaks, with more than 67,000 positive cases. But it has also slowly improved its testing performance, testing about 8,000 people in a day. Yet right now it is impossible to know how to interpret any of its accumulated results.

Texas, where the rate of new COVID-19 infections has stubbornly refused to fall, is one of the most worrying states (along with Georgia). The Texas Observer first reported last week that the state was lumping its viral and antibody results together. On Tuesday, Governor Greg Abbott denied that the state was blending the results, but the Dallas Observer reports that it is still doing so.

While the number of tests per day has increased in Texas, climbing to more than 20,000, the combined results mean that the testing data are essentially uninterpretable. It is impossible to know the true percentage of positive viral tests in Texas. It is impossible to know how many of the 718,000 negative results were not meant to diagnose a sick person. The state did not return a request for comment, nor has it produced data describing its antibody or viral results separately. (Some states, following guidelines from the Council of State and Territorial Epidemiologists, report antibody-test positives as “probable” COVID-19 cases without including them in their confirmed totals.)

Georgia is in a similar situation. It has also seen its COVID-19 infections plateau amid a surge in testing. Like Texas, it reported more than 20,000 new results on Wednesday, the majority of them negative. But because, according to The Macon Telegraph, it is also blending its viral and antibody results together, its true percent-positive rate is impossible to know. (The governor’s office did not return a request for comment.)

These results damage the public’s ability to understand what is happening in any one state. On a national scale, they call the strength of America’s response to the coronavirus into question. The number of tests conducted nationwide each day has more than doubled in the past month, rising from about 147,000 a month ago to more than 413,000 on Wednesday, according to the COVID Tracking Project at The Atlantic, which compiles data reported by state and territorial governments. In the past week, the daily number of tests has grown by about 90,000.

At the same time, the portion of tests coming back positive has plummeted, from a seven-day average of 10 percent at the month’s start to 6 percent on Wednesday.

“The numbers have outstripped what I was expecting,” Jha said. “My sense is people are really surprised that we’ve moved as much as we have in such a short time period. I think we all expected a move and we all expected improvement, but the pace and size of that improvement has been a big surprise.”

The intermingling of viral and antibody tests suggests that some of those gains might be illusory. If even a third of the country’s gain in testing has come by expanding antibody tests, not viral tests, then its ability to detect an outbreak is much smaller than it seems. There is no way to ascertain how much of the recent increase in testing is from antibody tests until the most populous states in the country—among them Texas, Georgia, and Pennsylvania—show their residents everything in the data.

Chinese Lab Believes New Drug Can “Stop Pandemic Even Without Vaccine”

Updated: May 19, 2020 01:48 pm IST

China, COVID-19 Vaccine: The laboratory says it has developed a drug that can bring the virus to a halt

Beijing, China:

A Chinese laboratory has been developing a drug it believes has the power to bring the coronavirus pandemic to a halt.

The outbreak first emerged in China late last year before spreading across the world, prompting an international race to find treatments and vaccines.

A drug being tested by scientists at China’s prestigious Peking University could not only shorten the recovery time for those infected, but even offer short-term immunity from the virus, researchers say. 

Sunney Xie, director of the university’s Beijing Advanced Innovation Center for Genomics, told AFP that the drug has been successful at the animal testing stage.

“When we injected neutralising antibodies into infected mice, after five days the viral load was reduced by a factor of 2,500,” said Xie.

“That means this potential drug has (a) therapeutic effect.”

The drug uses neutralising antibodies — produced by the human immune system to prevent the virus infecting cells — which Xie’s team isolated from the blood of 60 recovered patients. 

A study on the team’s research, published Sunday in the scientific journal Cell, suggests that using the antibodies provides a potential “cure” for the disease and shortens recovery time.

Xie said his team had been working “day and night” searching for the antibody.

“Our expertise is single-cell genomics rather than immunology or virology. When we realised that the single-cell genomic approach can effectively find the neutralising antibody we were thrilled.”

He added that the drug should be ready for use later this year and in time for any potential winter outbreak of the virus, which has infected 4.8 million people around the world and killed more than 315,000.

“Planning for the clinical trial is underway,” said Xie, adding it will be carried out in Australia and other countries since cases have dwindled in China, offering fewer human guinea pigs for testing.

“The hope is these neutralised antibodies can become a specialised drug that would stop the pandemic,” he said.

China already has five potential coronavirus vaccines at the human trial stage, a health official said last week.

But the World Health Organization has warned that developing a vaccine could take 12 to 18 months.

Scientists have also pointed to the potential benefits of plasma — a blood fluid — from recovered individuals who have developed antibodies to the virus, enabling the body’s defences to attack it.

More than 700 patients have received plasma therapy in China, a process which authorities said showed “very good therapeutic effects”.

“However, it (plasma) is limited in supply,” Xie said, noting that the 14 neutralising antibodies used in their drug could be put into mass production quickly.

Prevention and cure 

Using antibodies in drug treatments is not a new approach, and it has been successful in treating several other viruses such as HIV, Ebola and Middle East Respiratory Syndrome (MERS). 

Xie said his researchers had “an early start” since the outbreak started in China before spreading to other countries.

The Ebola drug remdesivir was considered a hopeful early treatment for COVID-19 — clinical trials in the US showed it shortened the recovery time in some patients by a third — but the difference in mortality rate was not significant.

The new drug could even offer short-term protection against the virus.

The study showed that if the neutralising antibody was injected before the mice were infected with the virus, the mice stayed free of infection and no virus was detected.

This may offer temporary protection for medical workers for a few weeks, which Xie said they are hoping to “extend to a few months”.

More than 100 vaccines for COVID-19 are in the works globally, but as the process of vaccine development is more demanding, Xie is hoping that the new drug could be a faster and more efficient way to stop the global march of the coronavirus.

“We would be able to stop the pandemic with an effective drug, even without a vaccine,” he said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

India has lost its spot as the world’s fastest-growing major economy after it grew more slowly than expected in the first three months of 2019.

Official data showed the economy grew 5.8%, which is slower than the 6.4% growth registered by China, and down from 6.6% in the previous quarter.

It was the first set of figures since Narendra Modi secured a second election victory this month.

During his first term, India became the world’s fastest-growing economy.

But the latest figures, which show the slowest growth rate in 17 quarters, mean India’s growth rate has fallen behind China’s for the first time in nearly two years.

Economists had been expecting the economy to grow 6.3% in the first quarter.

The figures will put pressure on Prime Minister Modi’s government and the central bank to provide stimulus to the economy through fiscal measures.

Unlike China, India’s economic growth has been driven by domestic consumption over the last 15 years and the latest data showed weaker consumer demand.

Investment growth also slowed, falling to 3.6% from 10.6% in the previous quarter.

Finance secretary Subhash Chandra Garg said that the next quarter – the three months of April to June – could also be “relatively slower”.

But it should start to turn around in the third quarter, he said.

‘Jobs crisis’

The Reserve Bank of India is now expected to cut interest rates next month.

Unemployment data, which had been repeatedly delayed, were also issued on Friday and showed a rate of 6.1% in the 2017/18 fiscal year.

“It shows the jobs crisis is still there: there are not many new ones being created,” said NR Bhanumurthy, a professor at the National Institute of Public Finance and Policy in New Delhi.

There has been no detailed official data on unemployment for several years, and chief statistician Pravin Srivastava told reporters "it would be unfair to compare it with the past."

Israeli academics use power supply units to leak secrets from isolated computers

POWER-SUPPLaY technique uses “singing capacitor” phenomenon for data exfiltration.

 
Image: IgorShubin

Academics from an Israeli university published new research last week showing how an attacker could turn a computer’s power supply unit into a rudimentary speaker that can secretly transmit data from an infected host using audio waves.

The technique, named POWER-SUPPLaY, is the work of Mordechai Guri, the head of R&D at the Ben-Gurion University of the Negev, in Israel.

Over the last half-decade, Guri has been pioneering research into new covert data-exfiltration channels: techniques that can be used for stealing data through unconventional means.

Guri has been developing these techniques specifically for extracting data from air-gapped systems — computers isolated on local networks with no internet access.

Such computers are often used on government or corporate networks to store sensitive data, such as classified files or intellectual property.

 

Air-gapped systems are protected by several layers of defenses on top of the “air gap” itself, and novel data-transmission techniques are needed to get around these defenses.

For example, some air-gapped systems don’t have speakers, because it’s been proven in the past that speakers could be abused to leak information from a secure system using inaudible sound waves.

MEET POWER-SUPPLAY, A NEW DATA EXFILTRATION TECHNIQUE

In a research paper shared with ZDNet yesterday, Guri said the POWER-SUPPLaY technique was developed for these types of situations, where speakers have been removed from air-gapped systems.

Using specially crafted malware, the Israeli researcher says, a power supply unit (PSU) can be transformed into a very simple speaker capable of emitting the most basic of audio waves.

[Guri has only developed and studied the data exfiltration technique. His work does not cover planting the malware on air-gapped systems or getting close enough to an air-gapped system to steal data; that is outside the scope of his project.]

Guri says the trick behind the POWER-SUPPLaY technique is to manipulate power inside a PSU’s capacitors to trigger a “singing capacitor phenomenon.”

This phenomenon generates acoustic waves when current passes through a capacitor at various frequencies. By controlling the power frequencies, the POWER-SUPPLaY malicious code can also control the audio waves and, hence, hide data inside them.

Image: Mordechai Guri

“Our technique works with various types of systems: PC workstations and servers, as well as embedded systems and IoT devices that have no audio hardware.

“Binary data can be modulated and transmitted out via the acoustic signals. The acoustic signals can then be intercepted by a nearby receiver (e.g., a smartphone), which demodulates and decodes the data and sends it to the attacker via the Internet,” Guri added.

The main advantage of the POWER-SUPPLaY technique is that the malware doesn’t need any special privileges.

“The transmitting code can be initiated from an ordinary user-space process and is highly evasive,” Guri said.
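
Guri’s code is not published in the article, but the mechanics he describes can be sketched in a few lines: an ordinary user-space process alternates between busy and idle loops to toggle the CPU’s power draw at one of two frequencies, a crude form of frequency-shift keying. Everything below is a hypothetical illustration; the frequencies, bit timing, and payload are assumptions, not values from the paper.

```python
import time

# Hypothetical sketch of workload-based frequency-shift keying (FSK).
# Alternating busy and idle half-periods at frequency f modulates the
# CPU's power draw at f, which, per the paper, shifts the acoustic tone
# emitted by the PSU's capacitors. This is not Guri's published code;
# all constants here are illustrative.
F_ZERO = 4_000       # Hz tone signalling a 0 bit (assumed)
F_ONE = 5_000        # Hz tone signalling a 1 bit (assumed)
BIT_SECONDS = 1.0    # long bit times, in line with the reported 0-40 bit/sec

def emit_tone(freq_hz: float, duration_s: float) -> None:
    """Toggle busy/idle half-periods to modulate CPU power draw at freq_hz."""
    half_period = 1.0 / (2.0 * freq_hz)
    end = time.perf_counter() + duration_s
    while time.perf_counter() < end:
        busy_until = time.perf_counter() + half_period
        while time.perf_counter() < busy_until:
            pass                     # busy half-period: high power draw
        time.sleep(half_period)      # idle half-period: low power draw

def transmit(payload: bytes) -> None:
    """Send each payload bit as a tone, most significant bit first."""
    for byte in payload:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            emit_tone(F_ONE if bit else F_ZERO, BIT_SECONDS)

transmit(b"pw")  # a nearby smartphone would record and demodulate the tones
```

A real transmitter would also need framing and error correction; the point of the sketch is only that nothing here requires special privileges, just timed busy-work.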

POWER-SUPPLAY CAN BROADCAST DATA UP TO SIX METERS AWAY

The downside is that the attack is not extremely fast, can’t transmit data over big distances, and is subject to background noise that may impact the transmission’s quality, making exfiltration in some scenarios almost impossible.

Guri said that the distance at which POWER-SUPPLaY works usually depends on the PSU brand and the bitrate and frequency bands at which the stolen data is encoded and then transmitted via acoustic signals.

The Israeli academic said that experiments have shown that POWER-SUPPLaY exfiltration speeds can vary between 0 and 40 bits/sec at short distances of up to 1 meter, or between 0 and 10 bits/sec when the data needs to travel more than 2 meters. The maximum transmission distance recorded in the experiments was 6 meters.

Guri said that the faster rates can reliably be used to transmit binary data such as keystroke logs and text files, while the slower rates could still transfer small amounts of data, such as short texts, encryption keys, and passwords.
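
Those rates put concrete bounds on what is practical to steal. As a rough calculation (the payload sizes below are arbitrary examples, not figures from the paper):

```python
# Back-of-the-envelope exfiltration times at the reported bitrates.
# Payload sizes are arbitrary examples, not from Guri's paper.
payloads = [("128-bit encryption key", 128),
            ("20-character password", 20 * 8),
            ("1 KB text file", 1024 * 8)]

for label, bits in payloads:
    for rate in (40, 10):   # bits/sec at <1 m and >2 m, per the experiments
        print(f"{label}: ~{bits / rate:.0f} s at {rate} bit/s")
```

A key leaks in seconds, while a kilobyte of text at the long-range rate takes nearly 14 minutes, which is why the technique suits small, high-value secrets.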

Basically, the closer an attacker can place a smartphone to the infected computer to record its sounds, the better the speed and the lower the transmission error rates.

Additional details about the technique and possible countermeasures are available in a research paper titled “POWER-SUPPLaY: Leaking Data from Air-Gapped Systems by Turning the Power-Supplies Into Speakers.”

Guri’s team has previously also worked on other air-gapped data exfiltration techniques, such as:

  • LED-it-Go – exfiltrate data from air-gapped systems via an HDD’s activity LED
  • USBee – force a USB connector’s data bus to give out electromagnetic emissions that can be used to exfiltrate data
  • AirHopper – use the local GPU card to emit electromagnetic signals to a nearby mobile phone, also used to steal data
  • Fansmitter – steal data from air-gapped PCs using sounds emanated by a computer’s GPU fan
  • DiskFiltration – use controlled read/write HDD operations to steal data via sound waves
  • BitWhisper – exfiltrate data from non-networked computers using heat emanations
  • Unnamed attack – uses flatbed scanners to relay commands to malware-infested PCs or to exfiltrate data from compromised systems
  • GSMem – steal data from air-gapped systems using GSM cellular frequencies
  • xLED – use router or switch LEDs to exfiltrate data
  • aIR-Jumper – use a security camera’s infrared capabilities to steal data from air-gapped networks
  • HVACKer – use HVAC systems to control malware on air-gapped systems
  • MAGNETO & ODINI – steal data from Faraday cage-protected systems
  • MOSQUITO – steal data from PCs using attached speakers and headphones
  • PowerHammer – steal data from air-gapped systems using power lines
  • CTRL-ALT-LED – steal data from air-gapped systems using keyboard LEDs
  • BRIGHTNESS – steal data from air-gapped systems using screen brightness variations
  • AiR-ViBeR – steal data using a computer’s fan vibrations

Categorized by their exfiltration channels, these techniques are summarized in a chart from Guri’s research. (Image: Mordechai Guri)

The UAE is going to Mars. Here’s the plan for its Hope orbiter.

By Meghan Bartels

The Hope spacecraft as seen during preparations for launch. (Image: © MBRSC)

The United Arab Emirates had its sights set on Mars the day before it launched its second satellite ever.

The resulting mission, a Mars orbiter dubbed Hope, has finished construction and is scheduled to launch this summer among a rush of spacecraft bound for the Red Planet. If all goes well, the UAE will become the fourth or fifth country to orbit Mars next February. While the country’s newly minted scientists are dedicated to learning something new about Mars, inquiry wasn’t the motivation for the mission.

“Going to Mars was not the main objective,” Omran Sharaf, mission lead for the Hope spacecraft, which is also known as the Emirates Mars Mission, told Space.com. “It’s a means for a bigger goal: to expedite the development in our educational sector, academic sector.”

Sharaf first heard about the Mars plan in November 2013, as he was preparing to see his prior project, an Earth-observing satellite called DubaiSat-2, launch. The Hope mission began with an order from UAE Prime Minister Mohammed bin Rashid Al Maktoum: Send a UAE-built science probe to Mars before the country’s 50th anniversary in December 2021, within a set budget.

There were other constraints too: “The science needs to be unique. Whatever you do, it shouldn’t be something that’s been done before,” Sharaf said the prime minister directed. “We should build it and not buy it, and work with others, don’t start from scratch.”

Jumping from Earth orbit to Mars in less than a decade is quite a leap, but a purposeful one for the UAE, which turned its gaze to spaceflight in 2006. “Only 50% of those missions [to Mars] succeed,” Sarah Al Amiri, science lead for the mission and the UAE’s minister of state for advanced sciences, told Space.com. “It provides the mindset that the UAE needed to have in youth who are going to be a vital part of the UAE’s post-oil economy. It’s about expanding their horizons and putting them in challenges at a time when the UAE is relatively comfortable as a nation.”

The mission is, after all, timed to celebrate the nation’s 50th anniversary. “We need to catch up with nations that have been around for centuries,” Al Amiri said.

Figuring out the science

As the Hope mission grew from proposal to project, team members began building connections with Mars scientists who could help them shape the goals of the spacecraft and with seasoned Mars-goers who could teach them how to design and build the probe.

For the latter, the UAE space center signed on with the University of Colorado Boulder’s Laboratory for Atmospheric and Space Physics, impressed by the lab’s experience with Mars missions and its demonstrated ability to foster less experienced team members into full-fledged participants.

For help with the science side of things, UAE representatives reached out to the Mars Exploration Program Analysis Group, an advisory group convened by NASA that includes scientists from around the world. In particular, the Hope team members needed to identify a scientifically valuable task that was feasible for this mission.

“We’re not there just to declare arrival to Mars,” Al Amiri said. “It doesn’t really make sense to call it planetary exploration and just make it about technology demonstration and about arrival.”

From that consultation, Al Amiri and her colleagues found a mission for the mission. “There was a large gap in the complete understanding of the atmosphere of Mars,” she said. “We don’t have a full understanding of the weather system of Mars throughout an entire year.”

Previous and current missions have gathered observations of the Martian weather, but only a couple of times throughout a day, Al Amiri said. These measurements have often come from surface missions and so are quite limited geographically. Weather is too complex and interconnected for scientists to really get a handle on how it works from such piecemeal data.

(Video: United Arab Emirates Planning Mars Mission)

So, Hope aims to monitor what’s happening in the Martian atmosphere for a full local year, including making connections between layers of the atmosphere. That will help scientists understand not only Martian weather, but also how Mars has lost some of its atmosphere over billions of years of planetary history.

“That science was a sweet spot for us,” Al Amiri said. “You’re complementary to other current missions, so you maximize the benefit that scientists are going to get globally from this mission, because it feeds into the current areas of research and expands on human knowledge as a whole.”

The goal of integrating data collected across layers of the Martian atmosphere means that Hope’s three instruments — a camera sensitive to optical and ultraviolet wavelengths and spectrometers tuned to infrared and to ultraviolet light — need to take simultaneous measurements that scientists can stack together.

To facilitate that process, Hope carries all its instruments on the same arm. The spacecraft also has a precisely tuned 55-hour orbit that enables two different views of Mars: one in which the planet rotates beneath the spacecraft, and one in which the spacecraft keeps pace and watches the same spot over time, Al Amiri said. That combination of views should make it easier for scientists to put together a complete map of the Martian atmosphere, she said.

Emirati personnel work with the Hope spacecraft during thermal vacuum testing. (Image credit: MBRSC)

Coronavirus management

Hope is scheduled to launch in late July or early August — the same window being targeted by NASA’s Perseverance rover and China’s first Mars mission, Tianwen. Until March, the European Space Agency and Russia were also in on the Red Planet rush with their joint ExoMars mission. But parachute problems had plagued the ExoMars mission for months, and with the travel restrictions prompted by the coronavirus pandemic, mission personnel weren’t confident they could be addressed in time. The mission is now scheduled to launch in 2022.

The Hope mission has also had to contend with pandemic complications, Sharaf said. “The coronavirus has definitely brought a different level of challenge into all this,” he said. “We thought that this phase would be the quiet phase, that we would be preparing for the launch, and it turns out that it’s not really the quiet phase because of COVID-19.”

As of late April, the spacecraft had arrived in Japan, from where it will launch aboard a Mitsubishi Heavy Industries H-IIA rocket. The spacecraft headed over earlier than originally planned to ensure it didn’t get caught in a pandemic-related hold-up.

The team also had to adjust its staffing plans. Rather than having teams bounce between the UAE and Japan, the agency sent a smaller team to Japan early enough that they would clear quarantine when the spacecraft arrived; those personnel will remain for three months straight.

Even so, something could go wrong, Sharaf said, potentially interfering with the launch. If everything goes smoothly, Hope should reach Mars sometime in February — the exact date hasn’t been set yet. And if everything continues to go smoothly, the spacecraft will spend about two terrestrial years at work around the Red Planet, with a potential mission extension likely to be evaluated a year into orbit.

“I never even fathomed the minute possibility of doing this, I don’t think any of us would have proposed to even say that ‘OK, let’s build a mission to Mars,'” Al Amiri said. “We would have thought incrementally, incrementally what’s the next best thing. But incrementally never works here. If you did not leap, if you didn’t push the bounds forward really, really fast you’re not able to overcome challenges.”

When asked how it felt to be looking ahead to the launch, especially with the US and China also targeting the Red Planet on the same timetable, Sharaf said he is “absolutely terrified, to be honest. Proud and optimistic, but terrified.”

Email Meghan Bartels at mbartels@space.com or follow her @meghanbartels.

A Harder Look at Alzheimer’s Causes and Treatments

Amyloid, the leading target for dementia therapy, faces skepticism after drug failures

 
Credit: Galen Dara

In March 2019 biotechnology giant Biogen stopped two big trials of its experimental Alzheimer’s disease drug aducanumab because it did not appear to improve memory in declining patients. Then, in a surprise reversal several months later, the company and its partner, Japanese drugmaker Eisai, said they would ask the U.S. Food and Drug Administration to approve the treatment. A new analysis, Biogen said, showed that a subset of people on the highest doses in one trial did benefit from the compound, which dissolves clumps of a protein called beta-amyloid within the brain.

The back-and-forth decisions, along with the failure of a slew of other amyloid-clearing compounds, have left experts divided about whether treating amyloid buildup—long thought to be the best target for an Alzheimer’s therapy—is still a promising approach.

Some of the scientists rethinking the so-called amyloid hypothesis helped to generate it in the first place. “I would say it has legs, but it’s limping,” says geneticist John Hardy, who co-authored the genetic studies that pioneered the idea more than two decades ago. According to Hardy, who runs a molecular neuroscience program at University College London’s Institute of Neurology, “the [concept] we drew in 1998 is cartoonishly oversimplistic. There were lots of question marks. We thought those questions would be filled in within a couple of years. And yet 20 years later they are not filled in.” Other experts, though, still contend that the amyloid hypothesis is a strong explanation and that treatments targeting the protein are the right way to go.

Beta-amyloid forms when amyloid precursor protein (APP) is chopped up by the enzymes beta-secretase and gamma-secretase. The beta-amyloid fragments are normally broken down further. But in people with Alzheimer’s, beta-amyloid accumulates around neurons. In addition, tangles of another protein, tau, form within neurons. These changes are ultimately followed by cell death and brain degeneration, which prompted suspicions that beta-amyloid was a cause. And people with a particular genetic form of Alzheimer’s have mutations in one of three genes that code for APP and two components of gamma-secretase called presenilins. Their brain cells have trouble getting rid of beta-amyloid. Further evidence about amyloid came from individuals with Down syndrome, who have an extra copy of chromosome 21—which carries the gene for APP—and thus make more of the protein. These individuals also have a high risk of developing dementia by age 50. Such discoveries led scientists to infer that a faulty amyloid-clearing mechanism was to blame in the disease.


But the numerous drug failures have led some to reconsider the effectiveness of aiming therapies solely at amyloid. Beta-amyloid often accumulates for years before symptoms start, and not everyone who has this pathology goes on to develop the disease. In February two amyloid-targeting drugs, Eli Lilly’s solanezumab and Roche’s gantenerumab, failed in a clinical trial for an early-onset, genetic form of the disease thought to be directly tied to amyloid metabolism.

A convergence of research, including work from the Alzheimer’s Disease Cooperative Study, supported by the U.S. National Institute on Aging, suggests that amyloid buildup is just one part of a complex cascade of interactions. “Our experiences with a variety of interventions targeting amyloid clearly have brought us to [this] point,” says Howard Feldman, director of the cooperative study, which is a consortium of academic and government laboratories that conducts clinical trials of Alzheimer’s treatments. “It seems very difficult that a single amyloid intervention is going to stem the tide of the disease.” Although the hypothesis may be a good explanation for the early-onset, genetically driven forms of the disease, the late-onset form probably involves multiple problems, so approaches aimed only at amyloid are unlikely to work, says Feldman, who is also a professor and clinical neurologist at the University of California, San Diego.

Some researchers, such as Karen Duff of Columbia University, favor the idea that tau protein tangles play a part that is as big as or bigger than that of beta-amyloid. One reason is that the degree of tau pathology more closely correlates with the seriousness of cognitive symptoms than amyloid pathology does.

Other scientists think inflammation or defects in the blood-brain barrier may play a critical role. But drugs targeting tau and inflammation have so far been ineffective, Feldman notes. He believes that a combination of interventions might be the best approach: “A single intervention may never be sufficient, outside of genetic [early-onset] forms of disease.”

There are other ideas as well. In recent years Hardy and his colleagues have come to view late-onset Alzheimer’s and other neurodegenerative diseases as the result of a faulty damage response. They believe that the early accumulation of beta-amyloid might damage neuronal cell membranes, and if immune cells called microglia fail to remove these damaged membrane proteins, it could prevent the cell membranes from adequately clearing more amyloid—spurring a cycle of damage. Recent genome-sequencing studies support this idea, Hardy says: the majority of genes identified as risk factors in late-onset Alzheimer’s involve microglial metabolism; others encode proteins that help to build and repair cell membranes.

Some scientists still believe that amyloid has a primary role because of several studies linking its aggregation to the seriousness of symptoms. “In my view, the hypothesis is very much alive and well,” says David Holtzman, chair of neurology at the Washington University School of Medicine in St. Louis. “There’s no question that science says beta-amyloid is important in the disease. The question is, When can it serve as a treatment?”

Hardy, though more skeptical than he was decades ago, thinks that the hypothesis has strong data behind it, and he believes that amyloid drugs might yield poor results because they are given far too late in the disease’s progression. “If I was having a heart attack, a statin might be the right drug, but it’s too late,” he says. Clinicians may eventually be able to measure genetic, blood or spinal fluid biomarkers to predict who is at risk of developing Alzheimer’s, which would make it possible to treat them before they develop symptoms.

Others say amyloid’s real importance might be as one of those biomarkers. “I think amyloid is a critically important marker to understand risk and how early we can diagnose disease,” says Denise Park, chair in behavioral and brain sciences at the University of Texas at Dallas, who studies brain aging. “I don’t think there’s anything right now that is better.”

Going forward, it seems unlikely that the field will abandon the amyloid hypothesis. But scientists do seem, after a long time, poised to take a broader view of other processes at work in this destroyer of minds and memories.

This article was originally published with the title “The Amyloid Drug Struggle” in Scientific American 322, 5, 34-35 (May 2020)

doi:10.1038/scientificamerican0520-34

Scientists create glowing plants using mushroom genes

New technique could help shed light on plants’ workings – and lead to unusual home decor

Bioluminescence could help scientists explore the inner workings of plants. Photograph: Planta/MRC London Institute of Medical Sciences

Emitting an eerie green glow, they look like foliage from a retro computer game, but in fact they are light-emitting plants produced in a laboratory.

Researchers say the glowing greenery could not only add an unusual dimension to home decor but also open up a fresh way for scientists to explore the inner workings of plants.

“In the future this technology can be used to visualise activities of different hormones inside the plants over the lifetime of the plant in different tissues, absolutely non-invasively. It can also be used to monitor plant responses to various stresses and changes in the environment, such as drought or wounding by herbivores,” said Dr Karen Sarkisyan, the CEO of Planta, the startup that led the work, and a researcher at Imperial College London.

“We really hope to bring this to the market in a few years from now, once we make them a bit brighter, once we make the ornamental plants with this new technology, and once of course they pass all the existing safety regulations,” he added.

Numerous animals, microbes and mushrooms – from fireflies to honey fungi – can glow, a phenomenon known as bioluminescence. This occurs when enzymes act on chemicals known as luciferins within the organism, resulting in energy being released in the form of light. However, bioluminescence does not crop up naturally among plants.

The latest research is not the first time scientists have created glowing greenery – a development that has led to suggestions of everything from plant-based street lights to self-illuminating Christmas trees. Among previous approaches, researchers have delivered both luciferins and the enzymes necessary for bioluminescence into plants via nanoparticles, while other teams have incorporated bacterial genes for bioluminescence into plants.

However, these approaches have had drawbacks: the delivery of a luciferin on tiny particles is more expensive and is not self-sustained, while the incorporation of bacterial bioluminescence genes involves a cumbersome process that results in only a weak glow. What’s more, said Sarkisyan, the latter approach appears to be toxic to plants.

The new research takes a different tack, harnessing the recently discovered process by which fungi emit light. The team say that is important as the process involves a luciferin produced from a chemical that is naturally present in plants – caffeic acid.

Writing in the journal Nature Biotechnology, Sarkisyan and a team of colleagues based in Russia and Austria report how they inserted four genes from a bioluminescent mushroom called Neonothopanus nambi into the DNA of tobacco plants. These genes relate to enzymes that convert caffeic acid, through a series of steps, into a luciferin that emits energy as light, before turning the resulting substance back into caffeic acid.

The upshot is plants that glow with a greenish hue visible to the naked eye. “They glow both in the dark and in the daylight,” said Sarkisyan, adding that the light appeared to be 10 times brighter than that produced by using bacterial genes.

The team found the site of the luminescence changed as the plants grew, and luminescence generally decreased as leaves aged and increased where leaves became damaged. Flowers produced the most luminescence, the team report.

Glowing flowers. Photograph: Planta/MRC London Institute of Medical Sciences

Sarkisyan said that in the future the team could insert the fungal genes into the plant’s DNA near genes that were activated by certain hormones. “You should be able to see the light coming only from the tissues where the hormone is currently active,” he said.

Gary Foster, a professor in molecular plant pathology at the University of Bristol, who was not involved in the research, said the glowing plants would mainly be used by scientists rather than for applications such as plant-based street lights, but were nonetheless welcome.

“Many luminescent marker genes have up until now required special light sources and/or cameras to visualise the location of expression. The system reported here will make that process easier,” he said.

Prof John Carr, from the University of Cambridge, also welcomed the work, but said there was more to be done. “The challenge now is to figure out how to make this engineered bioluminescence responsive to specific environmental, developmental, chemical or pathogenic stimuli,” he said. “This is essential if the technique will, literally, be able to throw new light on fundamental biological processes.”


THE SCI-FI GURU WHO PREDICTED GOOGLE EARTH EXPLAINS SILICON VALLEY’S LATEST OBSESSION

“Snow Crash” author Neal Stephenson talks to the Hive about the differences between augmented and virtual reality, how to create a convincing Metaverse, and why social media is driving us insane.

PHOTO BY BRADY HALL

Way back in 1992, author Neal Stephenson published his breakthrough novel, Snow Crash, a cyberpunk exploration of then-futuristic technologies: mobile computing, virtual reality, wireless Internet, digital currency, smartphones, and augmented-reality headsets. The book famously opens with a breakneck car chase as the main character, Hiro Protagonist (it’s something of a satire), races to deliver a pizza on time. It’s a literal life-or-death scene as our harried gig-economy driver races his GPS-enabled electric car through the streets of Los Angeles before the clock runs out and he risks angering the mob. TaskRabbit “independent contractors” can surely relate.

Twenty-five years later, Stephenson’s cult classic has become canon in Silicon Valley, where a host of engineers, entrepreneurs, futurists, and assorted computer geeks (including Amazon C.E.O. Jeff Bezos) still revere Snow Crash as a remarkably prescient vision of today’s tech landscape. Among the more prophetic inventions in the book is something Stephenson called “the Metaverse”—the same sort of wireless, online virtual-reality experience that Facebook, Google, Samsung, and practically every other major tech company are now competing to commercialize.

In an interview, Stephenson told Vanity Fair that he was just “making shit up.” But the Metaverse isn’t the only element of Snow Crash that has earned him a reputation as a tech Nostradamus. He’s credited with predicting everything from our addiction to portable technology to the digitization of, well, everything, and you can thank him, not James Cameron, for bringing the Hindu concept of “avatar” into everyday language. Google Earth designer Avi Bar-Zeev has said he was inspired by Stephenson’s ideas, and even tried to get the author to visit his office when he was working on Keyhole, an app suite that later served as a basis for Google’s mapping technology. “He wasn’t interested in visiting Keyhole, or didn’t have time. My best guess is that he was somewhat tired of hearing us engineering geeks rave about Snow Crash as a grand vision for the future. That may have something to do with Snow Crash being a dystopian vision.”


Dystopian or no, Stephenson’s vision of the future is almost here, and at least one tech company, virtual-reality start-up Magic Leap, has snapped up Stephenson in an official capacity—he became its Chief Futurist in 2014. Here, with the benefit of 25 years of hindsight, Stephenson talked to the Hive about the differences between augmented and virtual reality, how to create a convincing Metaverse, and why social media is driving us apart.

Vanity Fair: As Silicon Valley competes to build the best Metaverse, do you think consumers will be drawn more toward immersive virtual-reality experiences, like the one Mark Zuckerberg is selling with Facebook’s Oculus headset, or augmented-reality gear, like Apple’s Tim Cook is interested in developing?

Neal Stephenson: I think that those two options are more different than a lot of people realize. You look at somebody wearing a VR rig on their head, and somebody wearing an AR rig, anything that’s on the market now, and those two people kind of look the same. But what they’re seeing and experiencing is completely different. If you’re in a VR simulation, every photon that’s hitting your eye, everything you see is a virtual object that’s rendered from scratch by a computer graphics system.

If you’re in an AR application, you are where you are. You’re in your physical environment, you’re seeing everything around you normally, but there’s additional stuff that’s being added. So VR has the ability to take you to a completely different fictional place—the kind of thing that’s described in the Metaverse in Snow Crash. When you go into the Metaverse, you’re on the street, you’re in the Black Sun, and your surroundings disappear. In the book, Hiro lives in a shabby shipping container, but when he goes to the Metaverse, he’s a big deal and has access to super high-end real estate. AR’s a whole different bag.

Do you see VR and AR as competitors, like VHS and Betamax, or are they distinct technology platforms?

Completely separate and almost unrelated. The purpose of VR is to take you to a completely made-up place, and the purpose of AR is to change your experience of the place that you’re in. That pervades everything in terms of how you think about content, how you tell stories, what it is that you can actually do with these devices.

There’s a discussion in Snow Crash about how realistically human faces need to be rendered in the Metaverse—a phenomenon we now refer to as the “uncanny valley.” In the book, Hiro argues that realism isn’t important, while the lone woman in the group, Juanita, advocates for more recognizable human faces. Do you agree with her and the ultimate premise of the book that recognizable humanity is a vital element of a satisfying VR experience?

I’m doing this thing where I have to try to re-inhabit the Neal of 25 years ago in order to answer that question. I think that particular question is still absolutely important to all of this. Twenty-five years ago, it seemed harder and therefore more pressing than it might seem right now. Looking at the computer graphics of the late 1990s, some really interesting things were happening, but facial animation was still in its pretty early days. We hadn’t yet seen Gollum in The Fellowship of the Ring. But today, it’s just expected that we can do faces and we can do them pretty well. So, for example, when a deceased performer, Peter Cushing, showed up in Rogue One, people were like, “Oh yeah, of course they can do that.” Maybe there’s some criticism about how well it was done, but it’s totally considered to be a solved problem right now. It’s just obvious that you would want that.

There are characters in Snow Crash called Gargoyles, who are perpetually plugged in. You wrote: “Gargoyles are no fun to talk to. They never finish a sentence. They are adrift in a laser-drawn world, scanning retinas in all directions . . . You think they’re talking to you, but they’re actually poring over the credit record of some stranger on the other side of the room, or identifying the make and model of airplanes flying overhead.” Do you anticipate a future in which we’re all plugged in 24/7?

The Gargoyles in Snow Crash are kind of in a different category from the Metaverse users, because they’re using what we would now call an augmented-reality device. But I think we sort of are there already with smartphones. The real question isn’t whether it’s going to happen but how artfully it’s going to be done, and can we make it better than it is now? More social, more elegant, and just more conducive to healthy society, healthy interactions? The practice of going around hunched over a rectangle in your hand is completely normal now, and when I see somebody in a car or walking down the street hundreds of feet away, I can tell that they’re texting just by their posture. We all can. It’s what we’ve turned into because of the particular way that we’re connected through technology. I’d like to think it can get better than what we have now.

Class and privilege play a huge role in your Metaverse. If you can afford a certain kind of avatar, your experience is going to be better. Do you see any parallel to the present day?

That was probably better as satirical fiction than it is as a technological prediction, but it was too irresistible not to include the idea that you could discriminate against people based on the resolution of their avatar. On Facebook, we have more subtle ways of classifying people. Do they type in all caps? Who are their friends? If you look at somebody’s Facebook postings, after a few moments, you can glean information about what kind of person they are, how educated they are, what their social status is.

On the subject of class and privilege, in the novel it’s a group of scrappy young programmers who create and control the Metaverse. How different will it be when, say, a large corporation like Apple, Samsung, or Facebook is in control? What do you think of the prospect of a Zuckerberg-controlled Metaverse?

[Low laughter and a very, very, very long pause] I would say that anything that gets invented has got results, and some of those are predicted and some are not. There’s no fixed process for predicting the results and controlling what happens. At some level, it boils down to people’s capacity to act as socially responsible, ethical individuals.

One of the things that’s been interesting to observe with the rise of social media is the way in which the same technologies that initially seemed to be uniting us have in fact driven us further apart. Do you see virtual reality as ultimately contributing to the same political polarization that we’ve seen divide Twitter and Facebook?

Well, first, I should make full disclosure that I totally did not see that coming. Even a few years ago, to say nothing of 25 years ago, I really didn’t see the whole social-media bubble thing coming and didn’t—even when I became aware of it—didn’t really get its significance until November 8, 2016. So, that one I missed. The way that the Metaverse is designed—keeping in mind that this was pre-Internet as we know it, pre-World Wide Web, just me making shit up—there’s only one Metaverse. You have to go there, you can’t set up your own.

I’m tempted to say that if it really existed, it would be less vulnerable to some social-bubble formation than what we have now where anybody can create their own Web site or social-media feed. The thing that’s not intuitive that makes social-media bubbles so deceptive is that you don’t see what you’re not seeing. So, it just invisibly, behind the scenes, filters out all the stuff that you’d rather not see, and you’re not aware that filtering is taking place. That’s the thing that causes bubbles. It’s not the filtering, it’s the fact that the filtering happens invisibly.

You mentioned the election. Do you think that the current political climate—and maybe even the literal climate, what with Trump pulling out of the Paris climate accord—is creating increased urgency for an escapist VR experience?

If you think of it as an escapist thing, then my knee-jerk answer is that we ought to be doing the opposite. What I’m seeing in the wake of recent political changes is more people stepping back from escapism and trying to get more involved. I would certainly hope that when we enter uncertain and dangerous times, that people’s response to that is not to close their eyes and pretend it’s not happening but to find ways of making things better.

When you see the clear influence your work has had on newer popular-genre stories like last year’s “San Junipero” episode of Black Mirror, or Ernest Cline’s book Ready Player One, what does that feel like?

It’s not a thing that I spend a lot of time thinking about. Circling back to focus on previous works, I think, is just never a healthy thing to do. One of the drawbacks of doing everything electronically now is that back when I shipped paper manuscripts, I would print the thing out. I’d wrap it up and I would go to the FedEx box and I would put the thing in the big slot and I would sort of look at it for a second, and then I would shove it home and I would hear the clunk of this giant manuscript slamming into the bottom of the box. And then it was done. And I would just turn around and walk away and start working on the next book.

Ready Player One, unfortunately, is going to beat Snow Crash to the movie theater. Steven Spielberg’s adaptation is hitting theaters next year, while the Snow Crash movie has been in development for, well, 20 years. Producer Frank Marshall said very recently that Snow Crash could go into production as soon as this year. Do you have any updates you could share?

I was down there a week ago talking to the people who are working on it. It’s definitely a thing that’s being worked on in a serious way, and I like the people who are working on it. Frank and Kathy Kennedy have been associated with the property almost since the day it came out. I’ve always felt comfortable having them be the potential producers of it, because I’ve always known that they would treat it respectfully and do a good job of it, and I continue to feel that way. So, yeah, I’m paying attention and [am] interested to see what happens, but also trying to leave them alone to do their jobs and not be a pest.

 
 

Possible Dinosaur DNA Has Been Found

New discoveries have raised the possibility of exploring dino genetics, but controversy surrounds the results

[Image: Centrosaurus. Credit: Elena Duvernay/Getty Images]

The tiny fossil is unassuming, as dinosaur remains go. It is not as big as an Apatosaurus femur or as impressive as a Tyrannosaurus jaw. The object is just a scant shard of cartilage from the skull of a baby hadrosaur called Hypacrosaurus that perished more than 70 million years ago. But it may contain something never before seen from the depths of the Mesozoic era: degraded remnants of dinosaur DNA.

Genetic material is not supposed to last over such time periods—not by a long shot. DNA begins to decay at death. Findings from a 2012 study on moa bones show an organism’s genetic material deteriorates at such a rate that it halves itself every 521 years. This speed would mean paleontologists can only hope to recover recognizable DNA sequences from creatures that lived and died within the past 6.8 million years—far short of even the last nonavian dinosaurs.
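
To make that arithmetic concrete (a back-of-the-envelope reading of the half-life figure, not a calculation from the study itself): if the surviving fraction of DNA halves every 521 years, then after t years it is

N(t)/N₀ = (1/2)^(t/521)

so by 6.8 million years—roughly 13,000 halvings, a shrinkage by a factor of about 10^3900—essentially nothing readable could remain.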

But then there is the Hypacrosaurus cartilage. In a study published earlier this year, Chinese Academy of Sciences paleontologist Alida Bailleul and her colleagues proposed that in that fossil, they had found not only evidence of original proteins and cartilage-creating cells but a chemical signature consistent with DNA.

Recovering genetic material of such antiquity would be a major development. Working on more recently extinct creatures—such as mammoths and giant ground sloths—paleontologists have been able to revise family trees, explore the interrelatedness of species and even gain some insights into biological features such as variations in coloration. DNA from nonavian dinosaurs would add a wealth of new information about the biology of the “terrible lizards.” Such a find would also establish the possibility that genetic material can remain detectable not just for one million years, but for tens of millions. The fossil record would not be bones and footprints alone: it would contain scraps of the genetic record that ties together all life on Earth.

Yet first, paleontologists need to confirm that these possible genetic traces are the real thing. Such potential tatters of ancient DNA are not exactly Jurassic Park–quality. At best, their biological markers seem to be degraded remnants of genes that cannot be read—broken-down components rather than intact parts of a sequence. Still, these tatters would be far older (by millions of years) than the next oldest trace of degraded genetic material in the fossil record.

If upheld, Bailleul and her colleagues’ findings would indicate that biochemical traces of organisms can persist for tens of millions of years longer than previously thought. And that would mean there may be an entire world of biological information experts are only just getting to know. “I think exceptional preservation is really more common than what we think, because, as researchers, we have not looked at enough fossils yet,” Bailleul says. “We must keep looking.”

The question is whether these proteins and other traces are really what they seem. Hot on the heels of Bailleul’s paper—and inspired by the controversy over what the biomolecules inside dinosaur bones represent—a separate team, led by Princeton University geoscientist Renxing Liang, recently reported on unexpected microbes found inside a bone from Centrosaurus, a horned dinosaur of similar age to Hypacrosaurus. The researchers said that they unearthed DNA inside the bone, but it was from lineages of bacteria and other microorganisms that had not been seen before. The bone had its own unique microbiome, which could cause confusion as to whether proteins and possible genetic material belonged to the dinosaur itself or to bacteria that had come to reside within it during the fossilization process.

The discovery that such fossils can harbor bacterial communities different from those in the surrounding stone complicates the search for dinosaur DNA, proteins and other biomolecules. The modern may be overlaid on the past, creating a false image. “Even if any trace organics could be preserved,” Liang says, “the identification processes would be as challenging as finding a needle in the haystack and thus will likely lead to potential false claims.”

“Right now, molecular paleontology is controversial,” Bailleul says. The first sticking point is that when researchers look for traces of ancient biological molecules, they use technologies invented to find intact traces, not ones that have been degraded or altered by vast amounts of time. On top of that issue, there remains much experts do not know about how a dinosaur bone changes from organic tissue in a recently alive animal to a fossil hardened by minerals. “We have not figured out all of the complex mechanisms of molecular fossilization using chemistry. And we don’t know enough about the roles that microbes play,” Bailleul says. For example, it is unclear how modern microbes outside of fossils might interact with those that have been living within the bones.

These unknowns, as well as protocols that are still in development, fuel the ongoing debate over what the biological tidbits inside dinosaur bones represent. The research on the Hypacrosaurus cartilage looked at its microscopic details and used chemical stains that bind to DNA. In contrast, the study on the Centrosaurus bone used DNA sequencing to understand the nature of the genetic traces inside it—but did not look at its microstructure.

Bailleul acknowledges that considering previously unknown forms of microorganisms when studying dinosaur bone microbiology is important. But she proposes that it is unlikely bacteria would find their way into a cartilage cell and mimic its nucleus in such a way that researchers would mistake the microorganisms for the genuine article. Yet “you can never be too skeptical of your own results,” says paleogeneticist and author Ross Barnett, who was not involved in the two studies described above.

One of the largest difficulties in the ongoing debate, Barnett says, is a lack of replication. And paleogenetics has been through this problem before: Around the time the film Jurassic Park debuted in 1993, research papers heralded the discovery of Mesozoic DNA. Those claims were later overturned when other research teams could not replicate the same results. Even though the science of paleogenetics has changed since that time, the need for multiple labs to confirm the same result remains important. “If a different lab could be independently sent fossils from the same site, work up their own antibodies, do their own staining and get the same results, it would make things more believable,” Barnett says. Such collaboration has yet to take place for some of the assertions of exceptional dinosaurian preservation.

Nevertheless, molecular paleobiology is developing standards of evidence and protocols as it continues to search for clues held inside ancient bones. “I hope that many paleontologists or biologists, or both, are also trying to do this,” Bailleul says. “We can figure out the answers faster if we are all working on this together.”

Even if proposed dinosaur organics turn out to be false, the effort could still yield unexpected benefits. Bacterial communities are thought to be involved in the preservation of bones and in their replacement with minerals, thus helping dinosaur remains become fossils. “Future studies about ancient DNA from past microbial communities that used to live inside the dinosaur bones could shed more light on the roles of microorganisms in the fossilization and preservation of bones through geological time,” Liang says.

“These are very difficult questions,” Bailleul says. “But if we keep trying, there is hope that we will figure out most answers.” As the situation stands now, nothing is written in stone.


Wolfram has a peek into the theory of everything

Finally We May Have a Path to the Fundamental Theory of Physics…
and It’s Beautiful

[Image: Visual summary of the Wolfram Physics Project]

I Never Expected This

It’s unexpected, surprising—and for me incredibly exciting. To be fair, at some level I’ve been working towards this for nearly 50 years. But it’s just in the last few months that it’s finally come together. And it’s much more wonderful, and beautiful, than I’d ever imagined.

In many ways it’s the ultimate question in natural science: How does our universe work? Is there a fundamental theory? An incredible amount has been figured out about physics over the past few hundred years. But even with everything that’s been done—and it’s very impressive—we still, after all this time, don’t have a truly fundamental theory of physics.

Back when I used to do theoretical physics for a living, I must admit I didn’t think much about trying to find a fundamental theory; I was more concerned about what we could figure out based on the theories we had. And somehow I think I imagined that if there was a fundamental theory, it would inevitably be very complicated.

But in the early 1980s, when I started studying the computational universe of simple programs, I made what was for me a very surprising and important discovery: that even when the underlying rules for a system are extremely simple, the behavior of the system as a whole can be essentially arbitrarily rich and complex.

And this got me thinking: Could the universe work this way? Could it in fact be that underneath all of this richness and complexity we see in physics there are just simple rules? I soon realized that if that was going to be the case, we’d in effect have to go underneath space and time and basically everything we know. Our rules would have to operate at some lower level, and all of physics would just have to emerge.

By the early 1990s I had a definite idea about how the rules might work, and by the end of the 1990s I had figured out quite a bit about their implications for spacetime, gravity and other things in physics—and, basically as an example of what one might be able to do with science based on studying the computational universe, I devoted nearly 100 pages to this in my book A New Kind of Science.

I always wanted to mount a big project to take my ideas further. I tried to start around 2004. But pretty soon I got swept up in building Wolfram|Alpha, and the Wolfram Language and everything around it. From time to time I would see physicist friends of mine, and I’d talk about my physics project. There’d be polite interest, but basically the feeling was that finding a fundamental theory of physics was just too hard, and only kooks would attempt it.

It didn’t help that there was something that bothered me about my ideas. The particular way I’d set up my rules seemed a little too inflexible, too contrived. In my life as a computational language designer I was constantly thinking about abstract systems of rules. And every so often I’d wonder if they might be relevant for physics. But I never got anywhere. Until, suddenly, in the fall of 2018, I had a little idea.

It was in some ways simple and obvious, if very abstract. But what was most important about it to me was that it was so elegant and minimal. Finally I had something that felt right to me as a serious possibility for how physics might work. But wonderful things were happening with the Wolfram Language, and I was busy thinking about all the implications of finally having a full-scale computational language.

But then, at our annual Summer School in 2019, there were two young physicists (Jonathan Gorard and Max Piskunov) who were like, “You just have to pursue this!” Physics had been my great passion when I was young, and in August 2019 I had a big birthday and realized that, yes, after all these years I really should see if I can make something work.

So—along with the two young physicists who’d encouraged me—I began in earnest in October 2019. It helped that—after a lifetime of developing them—we now had great computational tools. And it wasn’t long before we started finding what I might call “very interesting things”. We reproduced, more elegantly, what I had done in the 1990s. And from tiny, structureless rules out were coming space, time, relativity, gravity and hints of quantum mechanics.

We were doing zillions of computer experiments, building intuition. And gradually things were becoming clearer. We started understanding how quantum mechanics works. Then we realized what energy is. We found an outline derivation of my late friend and mentor Richard Feynman’s path integral. We started seeing some deep structural connections between relativity and quantum mechanics. Everything just started falling into place. All those things I’d known about in physics for nearly 50 years—and finally we had a way to see not just what was true, but why.

I hadn’t ever imagined anything like this would happen. I expected that we’d start exploring simple rules and gradually, if we were lucky, we’d get hints here or there about connections to physics. I thought maybe we’d be able to have a possible model for the first 10^-100 seconds of the universe, but we’d spend years trying to see whether it might actually connect to the physics we see today.

In the end, if we’re going to have a complete fundamental theory of physics, we’re going to have to find the specific rule for our universe. And I don’t know how hard that’s going to be. I don’t know if it’s going to take a month, a year, a decade or a century. A few months ago I would also have said that I don’t even know if we’ve got the right framework for finding it.

But I wouldn’t say that anymore. Too much has worked. Too many things have fallen into place. We don’t know if the precise details of how our rules are set up are correct, or how simple or not the final rules may be. But at this point I am certain that the basic framework we have is telling us fundamentally how physics works.

It’s always a test for scientific models to compare how much you put in with how much you get out. And I’ve never seen anything that comes close. What we put in is about as tiny as it could be. But what we’re getting out are huge chunks of the most sophisticated things that are known about physics. And what’s most amazing to me is that at least so far we’ve not run across a single thing where we’ve had to say “oh, to explain that we have to add something to our model”. Sometimes it’s not easy to see how things work, but so far it’s always just been a question of understanding what the model already says, not adding something new.

At the lowest level, the rules we’ve got are about as minimal as anything could be. (Amusingly, their basic structure can be expressed in a fraction of a line of symbolic Wolfram Language code.) And in their raw form, they don’t really engage with all the rich ideas and structure that exist, for example, in mathematics. But as soon as we start looking at the consequences of the rules when they’re applied zillions of times, it becomes clear that they’re very elegantly connected to a lot of wonderful recent mathematics.

There’s something similar with physics, too. The basic structure of our models seems alien and bizarrely different from almost everything that’s been done in physics for at least the past century or so. But as we’ve gotten further in investigating our models something amazing has happened: we’ve found that not just one, but many of the popular theoretical frameworks that have been pursued in physics in the past few decades are actually directly relevant to our models.

I was worried this was going to be one of those “you’ve got to throw out the old” advances in science. It’s not. Yes, the underlying structure of our models is different. Yes, the initial approach and methods are different. And, yes, a bunch of new ideas are needed. But to make everything work we’re going to have to build on a lot of what my physicist friends have been working so hard on for the past few decades.

And then there’ll be the physics experiments. If you’d asked me even a couple of months ago when we’d get anything experimentally testable from our models I would have said it was far away. And that it probably wouldn’t happen until we’d pretty much found the final rule. But it looks like I was wrong. And in fact we’ve already got some good hints of bizarre new things that might be out there to look for.

OK, so what do we need to do now? I’m thrilled to say that I think we’ve found a path to the fundamental theory of physics. We’ve built a paradigm and a framework (and, yes, we’ve built lots of good, practical, computational tools too). But now we need to finish the job. We need to work through a lot of complicated computation, mathematics and physics. And see if we can finally deliver the answer to how our universe fundamentally works.

It’s an exciting moment, and I want to share it. I’m looking forward to being deeply involved. But this isn’t just a project for me or our small team. This is a project for the world. It’s going to be a great achievement when it’s done. And I’d like to see it shared as widely as possible. Yes, a lot of what has to be done requires top-of-the-line physics and math knowledge. But I want to expose everything as broadly as possible, so everyone can be involved in—and I hope inspired by—what I think is going to be a great and historic intellectual adventure.

Today we’re officially launching our Physics Project. From here on, we’ll be livestreaming what we’re doing—sharing whatever we discover in real time with the world. (We’ll also soon be releasing more than 400 hours of video that we’ve already accumulated.) I’m posting all my working materials going back to the 1990s, and we’re releasing all our software tools. We’ll be putting out bulletins about progress, and there’ll be educational programs around the project.

Oh, yes, and we’re putting up a Registry of Notable Universes. It’s already populated with nearly a thousand rules. I don’t think any of the ones in there yet are our own universe—though I’m not completely sure. But sometime—I hope soon—there might just be a rule entered in the Registry that has all the right properties, and that we’ll slowly discover that, yes, this is it—our universe finally decoded.

The Wolfram Physics Project

How It Works

OK, so how does it all work? I’ve written a 448-page technical exposition (yes, I’ve been busy the past few months!). Another member of our team (Jonathan Gorard) has written two 60-page technical papers. And there’s other material available at the project website. But here I’m going to give a fairly non-technical summary of some of the high points.

It all begins with something very simple and very structureless. We can think of it as a collection of abstract relations between abstract elements. Or we can think of it as a hypergraph—or, in simple cases, a graph.

We might have a collection of relations like

{{1, 2}, {2, 3}, {3, 4}, {2, 4}}

that can be represented by a graph like

ResourceFunction[
  "WolframModelPlot"][{{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 
 VertexLabels -> Automatic]

 

All we’re specifying here are the relations between elements (like {2,3}). The order in which we state the relations doesn’t matter (although the order within each relation does matter). And when we draw the graph, all that matters is what’s connected to what; the actual layout on the page is just a choice made for visual presentation. It also doesn’t matter what the elements are called. Here I’ve used numbers, but all that matters is that the elements are distinct.
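
As a quick illustration of that point—a sketch using built-in Wolfram Language graph functions, separate from the tools used elsewhere in this piece—the same relations, reordered and relabeled, give an isomorphic graph:

(* The second edge list is the first with elements relabeled
   (1 -> 10, 2 -> 20, ...) and the relations reordered. *)
IsomorphicGraphQ[
 Graph[DirectedEdge @@@ {{1, 2}, {2, 3}, {3, 4}, {2, 4}}],
 Graph[DirectedEdge @@@ {{20, 40}, {10, 20}, {30, 40}, {20, 30}}]]
(* -> True *)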

OK, so what do we do with these collections of relations, or graphs? We just apply a simple rule to them, over and over again. Here’s an example of a possible rule:

{{x, y}, {x, z}} → {{x, z}, {x, w}, {y, w}, {z, w}}

What this rule says is to pick up two relations—from anywhere in the collection—and see if the elements in them match the pattern {{x,y},{x,z}} (or, in the Wolfram Language, {{x_,y_},{x_,z_}}), where the two x’s can be anything, but both have to be the same, and the y and z can be anything. If there’s a match, then replace these two relations with the four relations on the right. The w that appears there is a new element that’s being created, and the only requirement is that it’s distinct from all other elements.
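
Here’s a minimal sketch of a single such rewrite using ordinary Wolfram Language pattern matching (an illustration only, not the project’s own code: step is a name introduced here, and a real updating scheme considers every possible match, not just the first one found):

(* Find one pair of relations sharing a first element, delete them, and
   add four new relations involving a fresh element w (Module makes w
   unique, so it is guaranteed distinct from all existing elements). *)
step[rels_List] := Module[{w},
  Replace[rels,
   {pre___, {x_, y_}, mid___, {x_, z_}, post___} :>
    {pre, mid, post, {x, z}, {x, w}, {y, w}, {z, w}}]]

step[{{1, 2}, {2, 3}, {3, 4}, {2, 4}}]
(* -> {{1, 2}, {3, 4}, {2, 4}, {2, w}, {3, w}, {4, w}}, matching the
   result shown below with 5 playing the role of w *)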

We can represent the rule as a transformation of graphs:

RulePlot[ResourceFunction[
   "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
     w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.5]

 

Now let’s apply the rule once to:

{{1, 2}, {2, 3}, {3, 4}, {2, 4}}

The {2,3} and {2,4} relations get matched, and the rule replaces them with four new relations, so the result is:

{{1, 2}, {3, 4}, {2, 4}, {2, 5}, {3, 5}, {4, 5}}

We can represent this result as a graph (which happens to be rendered flipped relative to the graph above):

ResourceFunction[
   "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
     w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 1]["FinalStatePlot", 
 VertexLabels -> Automatic]

 

OK, so what happens if we just keep applying the rule over and over? Here’s the result:

ResourceFunction[
  "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
    w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 10, "StatesPlotsList"]

 

Let’s do it a few more times, and make a bigger picture:

ResourceFunction[
  "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
    w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 14, "FinalStatePlot"]

 

What happened here? We have such a simple rule. Yet applying this rule over and over again produces something that looks really complicated. It’s not what our ordinary intuition tells us should happen. But actually—as I first discovered in the early 1980s—this kind of intrinsic, spontaneous generation of complexity turns out to be completely ubiquitous among simple rules and simple programs. And for example my book A New Kind of Science is about this whole phenomenon and why it’s so important for science and beyond.
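
(If you want to see that phenomenon in its most minimal classic form, the rule 30 cellular automaton—built into the Wolfram Language—shows the same surprise; this is an aside, separate from the hypergraph models:)

(* An extremely simple rule, run from a single black cell, generates an
   elaborate and seemingly random pattern. *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]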

But here what’s important about it is that it’s what’s going to make our universe, and everything in it. Let’s review again what we’ve seen. We started off with a simple rule that just tells us how to transform collections of relations. But what we get out is this complicated-looking object that, among other things, seems to have some definite shape.

We didn’t put in anything about this shape. We just gave a simple rule. And using that simple rule a graph was made. And when we visualize that graph, it comes out looking like it has a definite shape.

If we ignore all matter in the universe, our universe is basically a big chunk of space. But what is that space? We’ve had mathematical idealizations and abstractions of it for two thousand years. But what really is it? Is it made of something, and if so, what?

Well, I think it’s very much like the picture above. A whole bunch of what are essentially abstract points, abstractly connected together. Except that in the picture there are 6704 of these points, whereas in our real universe there might be more like 10^400 of them, or even many more.

All Possible Rules

We don’t (yet) know an actual rule that represents our universe—and it’s almost certainly not the one we just talked about. So let’s discuss what possible rules there are, and what they typically do.

One feature of the rule we used above is that it’s based on collections of “binary relations”, containing pairs of elements (like {2,3}). But the same setup lets us also consider relations with more elements. For example, here’s a collection of two ternary relations:

{{1, 2, 3}, {3, 4, 5}}

We can’t use an ordinary graph to represent things like this, but we can use a hypergraph—a construct where we generalize edges in graphs that connect pairs of nodes to “hyperedges” that connect any number of nodes:

ResourceFunction["WolframModelPlot"][{{1, 2, 3}, {3, 4, 5}}, 
 VertexLabels -> Automatic]

 

(Notice that we’re dealing with directed hypergraphs, where the order in which nodes appear in a hyperedge matters. In the picture, the “membranes” are just indicating which nodes are connected to the same hyperedge.)

We can make rules for hypergraphs too:

{{x, y, z}} → {{w, w, y}, {w, x, z}}

RulePlot[ResourceFunction[
   "WolframModel"][{{1, 2, 3}} -> {{4, 4, 2}, {4, 1, 3}}]]

 

And now here’s what happens if we run this rule starting from the simplest possible ternary hypergraph—the ternary self-loop {{0,0,0}}:

ResourceFunction[
   "WolframModel"][{{1, 2, 3}} -> {{4, 4, 2}, {4, 1, 3}}, {{0, 0, 0}},
   8]["StatesPlotsList", "MaxImageSize" -> 180]

 

Alright, so what happens if we just start picking simple rules at random? Here are some of the things they do:

urules24 = 
 Import["https://www.wolframcloud.com/obj/wolframphysics/Data/22-24-\
2x0-unioned-summary.wxf"]; SeedRandom[6783]; GraphicsGrid[
 Partition[
  ResourceFunction["WolframModelPlot"][List @@@ EdgeList[#]] & /@ 
   Take[Select[
     ParallelMap[
      UndirectedGraph[
        Rule @@@ 
         ResourceFunction["WolframModel"][#, {{0, 0}, {0, 0}}, 8, 
          "FinalState"], 
        GraphLayout -> "SpringElectricalEmbedding"] &, #Rule & /@ 
       RandomSample[urules24, 150]], 
     EdgeCount[#] > 10 && ConnectedGraphQ[#] &], 60], 10], 
 ImageSize -> Full]

 

Somehow this looks very zoological (and, yes, these models are definitely relevant for things other than fundamental physics—though probably particularly molecular-scale construction). But basically what we see here is that there are various common forms of behavior, some simple, and some not.

Here are some samples of the kinds of things we see:

GraphicsGrid[
 Partition[
  ParallelMap[
   ResourceFunction["WolframModel"][#[[1]], #[[2]], #[[3]], 
     "FinalStatePlot"] &, {{{{1, 2}, {1, 3}} -> {{1, 2}, {1, 4}, {2, 
        4}, {4, 3}}, {{0, 0}, {0, 0}}, 
     12}, {{{1, 2}, {1, 3}} -> {{1, 4}, {1, 4}, {2, 4}, {3, 2}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {1, 3}} -> {{2, 2}, {2, 4}, {1, 4}, {3, 4}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {1, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {1, 4}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {1, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {4, 1}}, {{0, 
       0}, {0, 0}}, 
     12}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 1}, {4, 1}, {4, 3}}, {{0, 
       0}, {0, 0}}, 
     9}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 4}, {1, 4}, {3, 4}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 4}, {2, 1}, {3, 4}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {1, 3}} -> {{4, 1}, {1, 4}, {4, 2}, {4, 3}}, {{0, 
       0}, {0, 0}}, 
     12}, {{{1, 2}, {2, 3}} -> {{1, 2}, {2, 1}, {4, 1}, {4, 3}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {2, 3}} -> {{1, 3}, {1, 4}, {3, 4}, {3, 2}}, {{0, 
       0}, {0, 0}}, 
     10}, {{{1, 2}, {2, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {1, 2}}, {{0, 
       0}, {0, 0}}, 9}}], 4], ImageSize -> Full]

 

And the big question is: if we were to run rules like these long enough, would they end up making something that reproduces our physical universe? Or, put another way, out in this computational universe of simple rules, can we find our physical universe?

A big question, though, is: How would we know? What we’re seeing here are the results of applying rules a few thousand times; in our actual universe they may have been applied 10^500 times so far, or even more. And it’s not easy to bridge that gap. And we have to work it from both sides. First, we have to use the best summary of the operation of our universe that what we’ve learned in physics over the past few centuries has given us. And second, we have to go as far as we can in figuring out what our rules actually do.

And here there’s potentially a fundamental problem: the phenomenon of computational irreducibility. One of the great achievements of the mathematical sciences, starting about three centuries ago, has been delivering equations and formulas that basically tell you how a system will behave without you having to trace each step in what the system does. But many years ago I realized that in the computational universe of possible rules, this very often isn’t possible. Instead, even if you know the exact rule that a system follows, you may still not be able to work out what the system will do except by essentially just tracing every step it takes.

One might imagine that—once we know the rule for some system—then with all our computers and brainpower we’d always be able to “jump ahead” and work out what the system would do. But actually there’s something I call the Principle of Computational Equivalence, which says that almost any time the behavior of a system isn’t obviously simple, it’s computationally as sophisticated as anything. So we won’t be able to “outcompute” it—and to work out what it does will take an irreducible amount of computational work.

Well, for our models of the universe this is potentially a big problem. Because we won’t be able to get even close to running those models for as long as the universe does. And at the outset it’s not clear that we’ll be able to tell enough from what we can do to see if it matches up with physics.

But the big recent surprise for me is that we seem to be lucking out. We do know that whenever there’s computational irreducibility in a system, there are also an infinite number of pockets of computational reducibility. But it’s completely unclear whether in our case those pockets will line up with things we know from physics. And the surprise is that it seems a bunch of them do.

What Is Space?

Let’s look at a particular, simple rule from our infinite collection:

{{x, y, y}, {z, x, u}} → {{y, v, y}, {y, z, v}, {u, v, v}}

RulePlot[ResourceFunction[
   "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 
     5}, {4, 5, 5}}]]

 

Here’s what it does:

ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ 
 ResourceFunction[
   "WolframModel"][{{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 
      5}, {4, 5, 5}}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]

 

And after a while this is what happens:

Row[Append[
  Riffle[ResourceFunction[
       "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 
         5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, #, 
      "FinalStatePlot"] & /@ {200, 500}, " ... "], " ..."]]

 

It’s basically making us a very simple “piece of space”. If we keep on going longer and longer it’ll make a finer and finer mesh, to the point where what we have is almost indistinguishable from a piece of a continuous plane.

Here’s a different rule:

{{x, x, y}, {z, u, x}} → {{u, u, z}, {v, u, v}, {v, y, x}}

RulePlot[ResourceFunction[
   "WolframModel"][{{x, x, y}, {z, u, x}} -> {{u, u, z}, {v, u, 
     v}, {v, y, x}}]]

 

ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ 
 ResourceFunction[
   "WolframModel"][{{1, 1, 2}, {3, 4, 1}} -> {{4, 4, 3}, {5, 4, 
     5}, {5, 2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]

 

ResourceFunction[
  "WolframModel"][{{1, 1, 2}, {3, 4, 1}} -> {{4, 4, 3}, {5, 4, 5}, {5,
     2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalStatePlot"]

 

It looks like it’s “trying to make” something 3D. Here’s another rule:

{{x, y, z}, {u, y, v}} → {{w, z, x}, {z, w, u}, {x, y, w}}

RulePlot[ResourceFunction[
   "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 
     4}, {1, 2, 6}}]]

 

ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ 
 ResourceFunction[
   "WolframModel"][{{x, y, z}, {u, y, v}} -> {{w, z, x}, {z, w, 
     u}, {x, y, w}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]

 

ResourceFunction[
  "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1,
     2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 1000, "FinalStatePlot"]

 

Isn’t this strange? We have a rule that’s just specifying how to rewrite pieces of an abstract hypergraph, with no notion of geometry, or anything about 3D space. And yet it produces a hypergraph that’s naturally laid out as something that looks like a 3D surface.

Even though the only thing that’s really here is connections between points, we can “guess” where a surface might be, then we can show the result in 3D:

ResourceFunction["GraphReconstructedSurface"][
 ResourceFunction[
   "WolframModel"][ {{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 
     4}, {1, 2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalState"]]

 

If we keep going, then like the example of the plane, the mesh will get finer and finer, until basically our rule has grown us—point by point, connection by connection—something that’s like a continuous 3D surface of the kind you might study in a calculus class. Of course, in some sense, it’s not “really” that surface: it’s just a hypergraph that represents a bunch of abstract relations—but somehow the pattern of those relations gives it a structure that’s a closer and closer approximation to the surface.

And this is basically how I think space in the universe works. Underneath, it’s a bunch of discrete, abstract relations between abstract points. But at the scale we’re experiencing it, the pattern of relations it has makes it seem like continuous space of the kind we’re used to. It’s a bit like what happens with, say, water. Underneath, it’s a bunch of discrete molecules bouncing around. But to us it seems like a continuous fluid.

Needless to say, people have thought that space might ultimately be discrete ever since antiquity. But in modern physics there was never a way to make it work—and anyway it was much more convenient for it to be continuous, so one could use calculus. But now it’s looking like the idea of space being discrete is actually crucial to getting a fundamental theory of physics.

The Dimensionality of Space

A very fundamental fact about space as we experience it is that it is three-dimensional. So can our rules reproduce that? Two of the rules we just saw produce what we can easily recognize as two-dimensional surfaces—in one case flat, in the other case arranged in a certain shape. Of course, these are very bland examples of (two-dimensional) space: they are effectively just simple grids. And while this is what makes them easy to recognize, it also means that they’re not actually much like our universe, where there’s in a sense much more going on.

So, OK, take a case like:

ResourceFunction[
  "WolframModel"][{{1, 2, 3}, {4, 3, 5}} -> {{3, 5, 2}, {5, 2, 4}, {2,
     1, 6}}, {{0, 0, 0}, {0, 0, 0}}, 22, "FinalStatePlot"]

 

If we were to go on long enough, would this make something like space, and, if so, with how many dimensions? To know the answer, we have to have some robust way to measure dimension. But remember, the pictures we’re drawing are just visualizations; the underlying structure is a bunch of discrete relations defining a hypergraph—with no information about coordinates, or geometry, or even topology. And, by the way, to emphasize that point, here is the same graph—with exactly the same connectivity structure—rendered four different ways:

GridGraph[{10, 10}, GraphLayout -> #, 
   VertexStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "VertexStyle"], 
   EdgeStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "EdgeLineStyle"] ] & /@ {"SpringElectricalEmbedding", 
  "TutteEmbedding", "RadialEmbedding", "DiscreteSpiralEmbedding"}

 

But getting back to the question of dimension, recall that the area of a circle is πr^2, and the volume of a sphere is (4/3)πr^3. In general, the “volume” of the d-dimensional analog of a sphere is a constant multiplied by r^d. But now think about our hypergraph. Start at some point in the hypergraph. Then follow r hyperedges in all possible ways. You’ve effectively made the analog of a “spherical ball” in the hypergraph. Here are examples for graphs corresponding to 2D and 3D lattices:

MakeBallPicture[g_, rmax_] := 
  Module[{gg = UndirectedGraph[g], cg}, cg = GraphCenter[gg]; 
   Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r]], {r, 0, 
     rmax}]];
Graph[#, ImageSize -> 60, 
   VertexStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "VertexStyle"], 
   EdgeStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "EdgeLineStyle"] ] & /@ MakeBallPicture[GridGraph[{11, 11}], 7]

 

MakeBallPicture[g_, rmax_] := 
 Module[{gg = UndirectedGraph[g], cg}, cg = GraphCenter[gg]; 
  Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r]], {r, 0, 
    rmax}]]; 
Graph[#, ImageSize -> 80, 
   VertexStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "VertexStyle"], 
   EdgeStyle -> 
    ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph",
      "EdgeLineStyle"] ] & /@ MakeBallPicture[GridGraph[{7, 7, 7}], 5]

 

And if you now count the number of points reached by going “graph distance r” (i.e. by following r connections in the graph) you’ll find in these two cases that they indeed grow like r^2 and r^3.
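
Here’s a minimal version of that counting for the 2D grid case (a sketch with built-in functions, in the spirit of the highlighted pictures above):

(* Number of vertices within graph distance r of the center of an
   11x11 grid graph: 1, 5, 13, 25, ... — growing roughly like r^2. *)
With[{g = GridGraph[{11, 11}]},
 Table[Length[VertexList[NeighborhoodGraph[g, GraphCenter[g], r]]],
  {r, 0, 5}]]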

So this gives us a way to measure the effective dimension of our hypergraphs. Just start at a particular point and see how many points you reach by going r steps:

gg = UndirectedGraph[
   ResourceFunction["HypergraphToGraph"][
    ResourceFunction[
      "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, 
        w}, {z, w}}, {{1, 2}, {1, 3}}, 11, "FinalState"]]];
With[{cg = GraphCenter[gg]}, 
 Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r], 
   ImageSize -> 90], {r, 6}]]

 

Now to work out effective dimension, we in principle just have to fit the results to r^d. It’s a bit complicated, though, because we need to avoid small r (where every detail of the hypergraph is going to matter) and large r (where we’re hitting the edge of the hypergraph)—and we also need to think about how our “space” is refining as the underlying system evolves. But in the end we can generate a series of fits for the effective dimension—and in this case these say that the effective dimension is about 2.7:

HypergraphDimensionEstimateList[hg_] := 
  ResourceFunction["LogDifferences"][
   MeanAround /@ 
    Transpose[
     Values[ResourceFunction["HypergraphNeighborhoodVolumes"][hg, All,
        Automatic]]]];
ListLinePlot[
 Select[Length[#] > 3 &][
  HypergraphDimensionEstimateList /@ 
   Drop[ResourceFunction[
      "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z,
         w}}, {{1, 2}, {1, 3}}, 16, "StatesList"], 4]], Frame -> True,
  PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], 
   Hue[0.05, 0.9493847125498949, 0.954757], Hue[
   0.0889039442504032, 0.7504362741954692, 0.873304], Hue[
   0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[
   0.98654716551403, 0.6728487861309527, 0.733028], Hue[
   0.04, 0.68, 0.9400000000000001], Hue[
   0.9945149844324427, 0.9892162267509705, 0.823529], Hue[
   0.9908289627180552, 0.4, 0.9]}]
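
The essence of what such fits measure is just the local slope of log V(r) against log r. Here’s a stripped-down sketch with made-up volume data (vols is hypothetical, for illustration only—not output from the models):

(* If V(r) ~ c r^d, the slope of log V against log r estimates d.
   These "volumes" are invented purely to show the computation. *)
vols = {5, 18, 38, 66, 101, 143};
Differences[Log[N[vols]]]/Differences[Log[N[Range[Length[vols]]]]]
(* -> a list of local dimension estimates, here hovering around 1.9 *)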

 

If we do the same thing for

ResourceFunction[
  "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 
    5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, 200, "FinalStatePlot"]

 

it’s limiting to dimension 2, as it should:

CenteredDimensionEstimateList[g_Graph] := 
  ResourceFunction["LogDifferences"][
   N[First[Values[
      ResourceFunction["GraphNeighborhoodVolumes"][g, 
       GraphCenter[g]]]]]];
Show[ListLinePlot[
  Table[CenteredDimensionEstimateList[
    UndirectedGraph[
     ResourceFunction["HypergraphToGraph"][
      ResourceFunction[
        "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 
          5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, t, 
       "FinalState"]]]], {t, 500, 2500, 500}], Frame -> True, 
  PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], 
    Hue[0.05, 0.9493847125498949, 0.954757], Hue[
    0.0889039442504032, 0.7504362741954692, 0.873304], Hue[
    0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[
    0.98654716551403, 0.6728487861309527, 0.733028], Hue[
    0.04, 0.68, 0.9400000000000001], Hue[
    0.9945149844324427, 0.9892162267509705, 0.823529], Hue[
    0.9908289627180552, 0.4, 0.9]}], 
 Plot[2, {r, 0, 50}, PlotStyle -> Dotted]]

 

What does the fractional dimension mean? Well, consider fractals, which our rules can easily make:

{{x, y, z}} → {{x, u, w}, {y, v, u}, {z, w, v}}

RulePlot[ResourceFunction[
   "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 5}}]]

 

ResourceFunction["WolframModelPlot"][#, "MaxImageSize" -> 100] & /@ 
 ResourceFunction[
   "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 
     5}}, {{0, 0, 0}}, 6, "StatesList"]

 

If we measure the dimension here we get 1.58—the usual fractal dimension log₂(3) for a Sierpiński structure:

HypergraphDimensionEstimateList[hg_] := 
 ResourceFunction["LogDifferences"][
  MeanAround /@ 
   Transpose[
    Values[ResourceFunction["HypergraphNeighborhoodVolumes"][hg, All, 
      Automatic]]]]; Show[
 ListLinePlot[
  Drop[HypergraphDimensionEstimateList /@ 
    ResourceFunction[
      "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 
        5}}, {{0, 0, 0}}, 8, "StatesList"], 2], 
  PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], 
    Hue[0.05, 0.9493847125498949, 0.954757], Hue[
    0.0889039442504032, 0.7504362741954692, 0.873304], Hue[
    0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[
    0.98654716551403, 0.6728487861309527, 0.733028], Hue[
    0.04, 0.68, 0.9400000000000001], Hue[
    0.9945149844324427, 0.9892162267509705, 0.823529], Hue[
    0.9908289627180552, 0.4, 0.9]}, Frame -> True, 
  PlotRange -> {0, Automatic}], 
 Plot[Log[2, 3], {r, 0, 150}, PlotStyle -> {Dotted}]]

 

Our rule above doesn’t create a structure that’s as regular as this. In fact, even though the rule itself is completely deterministic, the structure it makes looks quite random. But what our measurements suggest is that when we keep running the rule it produces something that’s like 2.7-dimensional space.

Of course, 2.7 is not 3, and presumably this particular rule isn’t the one for our particular universe (though it’s not clear what effective dimension it’d have if we ran it 10^100 steps). But the process of measuring dimension shows an example of how we can start making “physics-connectable” statements about the behavior of our rules.

By the way, we’ve been talking about “making space” with our models. But actually, we’re not just trying to make space; we’re trying to make everything in the universe. In standard current physics, there’s space—described mathematically as a manifold—serving as a kind of backdrop, and then there’s everything that’s in space, all the matter and particles and planets and so on.

But in our models there’s in a sense nothing but space—and in a sense everything in the universe must be “made of space”. Or, put another way, it’s the exact same hypergraph that’s giving us the structure of space, and everything that exists in space.

So what this means is that, for example, a particle like an electron or a photon must correspond to some local feature of the hypergraph, a bit like in this toy example:

Graph[EdgeAdd[
  EdgeDelete[
   NeighborhoodGraph[
    IndexGraph@ResourceFunction["HexagonalGridGraph"][{6, 5}], {42, 
     48, 54, 53, 47, 41}, 4], {30 <-> 29, 42 <-> 41}], {30 <-> 41, 
   42 <-> 29}], 
 VertexSize -> {Small, 
   Alternatives @@ {30, 36, 42, 41, 35, 29} -> Large}, 
 EdgeStyle -> {ResourceFunction["WolframPhysicsProjectStyleData"][
    "SpatialGraph", "EdgeLineStyle"], 
   Alternatives @@ {30 \[UndirectedEdge] 24, 24 \[UndirectedEdge] 18, 
      18 \[UndirectedEdge] 17, 17 \[UndirectedEdge] 23, 
      23 \[UndirectedEdge] 29, 29 \[UndirectedEdge] 35, 
      35 \[UndirectedEdge] 34, 34 \[UndirectedEdge] 40, 
      40 \[UndirectedEdge] 46, 46 \[UndirectedEdge] 52, 
      52 \[UndirectedEdge] 58, 58 \[UndirectedEdge] 59, 
      59 \[UndirectedEdge] 65, 65 \[UndirectedEdge] 66, 
      66 \[UndirectedEdge] 60, 60 \[UndirectedEdge] 61, 
      61 \[UndirectedEdge] 55, 55 \[UndirectedEdge] 49, 
      49 \[UndirectedEdge] 54, 49 \[UndirectedEdge] 43, 
      43 \[UndirectedEdge] 37, 37 \[UndirectedEdge] 36, 
      36 \[UndirectedEdge] 30, 30 \[UndirectedEdge] 41, 
      42 \[UndirectedEdge] 29, 36 \[UndirectedEdge] 42, 
      35 \[UndirectedEdge] 41, 41 \[UndirectedEdge] 47, 
      47 \[UndirectedEdge] 53, 53 \[UndirectedEdge] 54, 
      54 \[UndirectedEdge] 48, 48 \[UndirectedEdge] 42} -> 
    Directive[AbsoluteThickness[2.5], Darker[Red, .2]]}, 
 VertexStyle -> 
  ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", 
   "VertexStyle"]]

 

To give a sense of scale, though, I have an estimate that says that 10^200 times more “activity” in the hypergraph that represents our universe is going into “maintaining the structure of space” than is going into maintaining all the matter we know exists in the universe.

Curvature in Space & Einstein’s Equations

Here are a few structures that simple examples of our rules make:

GraphicsRow[{WolframModel[ {{1, 2, 2}, {1, 3, 4}} -> {{4, 5, 5}, {5, 
      3, 2}, {1, 2, 5}}, {{0, 0, 0}, {0, 0, 0}}, 1000, 
   "FinalStatePlot"], 
  WolframModel[{{1, 1, 2}, {1, 3, 4}} -> {{4, 4, 5}, {5, 4, 2}, {3, 2,
       5}}, {{0, 0, 0}, {0, 0, 0}}, 1000, "FinalStatePlot"], 
  WolframModel[{{1, 1, 2}, {3, 4, 1}} -> {{3, 3, 5}, {2, 5, 1}, {2, 6,
       5}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalStatePlot"]}, 
 ImageSize -> Full]

 

But while all of these look like surfaces, they’re all obviously different. And one way to characterize them is by their local curvature. Well, it turns out that in our models, curvature is a concept closely related to dimension—and this fact will actually be critical in understanding, for example, how gravity arises.

But for now, let’s talk about how one would measure curvature on a hypergraph. Normally the area of a circle is πr^2. But let’s imagine that we’ve drawn a circle on the surface of a sphere, and now we’re measuring the area on the sphere that’s inside the circle:

cappedSphere[angle_] := 
  Module[{u, v}, 
   With[{spherePoint = {Cos[u] Sin[v], Sin[u] Sin[v], Cos[v]}}, 
    Graphics3D[{First@
         ParametricPlot3D[spherePoint, {v, #1, #2}, {u, 0, 2 \[Pi]}, 
          Mesh -> None, ##3] & @@@ {{angle, \[Pi], 
         PlotStyle -> Lighter[Yellow, .5]}, {0, angle, 
         PlotStyle -> Lighter[Red, .3]}}, 
      First@ParametricPlot3D[
        spherePoint /. v -> angle, {u, 0, 2 \[Pi]}, 
        PlotStyle -> Darker@Red]}, Boxed -> False, 
     SphericalRegion -> False, Method -> {"ShrinkWrap" -> True}]]];
Show[GraphicsRow[Riffle[cappedSphere /@ {0.3, Pi/6, .8}, Spacer[30]]],
  ImageSize -> 250]

 

This area is no longer πr^2. Instead it’s πr^2(1 − r^2/(12a^2) + …), where a is the radius of the sphere. In other words, as the radius of the circle gets bigger, the effect of being on the sphere is ever more important. (On the surface of the Earth, imagine a circle drawn around the North Pole; once it gets to the equator, it can never get any bigger.)
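
(That expansion is easy to check symbolically—a quick sketch, using the standard formula 2πa^2(1 − cos(r/a)) for the area of a spherical cap of geodesic radius r:)

(* Expand the spherical-cap area for small r: the leading term is Pi r^2,
   with a -Pi r^4/(12 a^2) curvature correction. *)
Series[2 Pi a^2 (1 - Cos[r/a]), {r, 0, 4}]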

If we generalize to d dimensions, it turns out the formula for the growth rate of the volume is r^d(1 − (R/(6(d+2)))r^2 + …), where R is a mathematical object known as the Ricci scalar curvature.

So what this all means is that if we look at the growth rates of spherical balls in our hypergraphs, we can expect two contributions: a leading one of order r^d that corresponds to effective dimension, and a “correction” of order r^2 that represents curvature.

Here’s an example. Instead of giving a flat estimate of dimension (here equal to 2), we have something that dips down, reflecting the positive (“sphere-like”) curvature of the surface:

res = CloudGet["https://wolfr.am/L1ylk12R"];
GraphicsRow[{ResourceFunction["WolframModelPlot"][
   ResourceFunction[
     "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 
       4}, {1, 2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 800, "FinalState"]], 
  ListLinePlot[res, Frame -> True, 
   PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], 
     Hue[0.05, 0.9493847125498949, 0.954757], Hue[
     0.0889039442504032, 0.7504362741954692, 0.873304], Hue[
     0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[
     0.98654716551403, 0.6728487861309527, 0.733028], Hue[
     0.04, 0.68, 0.9400000000000001], Hue[
     0.9945149844324427, 0.9892162267509705, 0.823529], Hue[
     0.9908289627180552, 0.4, 0.9]}]}]

 

What is the significance of curvature? One thing is that it has implications for geodesics. A geodesic is the shortest distance between two points. In ordinary flat space, geodesics are just lines. But when there’s curvature, the geodesics are curved:

(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-04/Geodesics-01.wl*)
CloudGet["https://wolfr.am/L1PH6Rne"];
hyperboloidGeodesics = Table[
Part[
NDSolve[{Sinh[
         2 u[t]] ((2 Derivative[1][u][t]^2 - Derivative[1][v][t]^2)/(
         2 Cosh[2 u[t]])) + Derivative[2][u][t] == 0, ((2 Tanh[
u[t]]) Derivative[1][u][t]) Derivative[1][v][t] + Derivative[2][v][
        t] == 0, u[0] == -0.9, v[0] == v0, u[1] == 0.9, v[1] == v0}, {
     
u[t], 
v[t]}, {t, 0, 1}, MaxSteps -> Infinity], 1], {v0, 
Range[-0.1, 0.1, 0.025]}];
{SphereGeodesics[Range[-.1, .1, .025]], 
 PlaneGeodesics[Range[-.1, .1, .025]], 
 Show[ParametricPlot3D[{Sinh[u], Cosh[u] Sin[v], 
    Cos[v] Cosh[u]}, {u, -1, 1}, {v, -\[Pi]/3, \[Pi]/3}, 
   Mesh -> False, Boxed -> False, Axes -> False, PlotStyle -> color], 
  ParametricPlot3D[{Sinh[u[t]], Cosh[u[t]] Sin[v[t]], 
       Cos[v[t]] Cosh[u[t]]} /. #, {t, 0, 1}, PlotStyle -> Red] & /@ 
   hyperboloidGeodesics, ViewAngle -> 0.3391233203265557`, 
  ViewCenter -> {{0.5`, 0.5`, 0.5`}, {0.5265689095305934`, 
     0.5477310383268459`}}, 
  ViewPoint -> {1.7628482856617167`, 0.21653966523483362`, 
    2.8801868854502355`}, 
  ViewVertical -> {-0.1654573174671554`, 0.1564093539158781`, 
    0.9737350718261054`}]}

 

In the case of positive curvature, bundles of geodesics converge; for negative curvature they diverge. But, OK, even though geodesics were originally defined for continuous space (actually, as the name suggests, for paths on the surface of the Earth), one can also have them in graphs (and hypergraphs). And it’s the same story: the geodesic is the shortest path between two points in the graph (or hypergraph).

Here are geodesics on the “positive-curvature surface” created by one of our rules:

findShortestPath[edges_, endpoints : {{_, _} ...}] := 
  FindShortestPath[
     Catenate[Partition[#, 2, 1, 1] & /@ edges], #, #2] & @@@ 
   endpoints;
pathEdges[edges_, path_] := 
  Select[Count[Alternatives @@ path]@# >= 2 &]@edges;
plotGeodesic[edges_, endpoints : {{_, _} ...}, o : OptionsPattern[]] := 
  With[{vertexPaths = findShortestPath[edges, endpoints]}, 
   ResourceFunction["WolframModelPlot"][edges, o, 
    GraphHighlight -> Catenate[vertexPaths], 
    EdgeStyle -> <|
      Alternatives @@ Catenate[pathEdges[edges, #] & /@ vertexPaths] -> 
       Directive[AbsoluteThickness[4], Red]|>]];
plotGeodesic[edges_, endpoints : {__ : Except@List}, 
   o : OptionsPattern[]] := plotGeodesic[edges, {endpoints}, o];
plotGeodesic[
 ResourceFunction[
   "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 
     4}, {1, 2, 6}}, Automatic, 1000, 
  "FinalState"], {{123, 721}, {24, 552}, {55, 671}}, 
 VertexSize -> 0.12]

 

And here they are for a more complicated structure:

(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-04/Geodesics-01.wl*)

CloudGet["https://wolfr.am/L1PH6Rne"];(*Geodesics*)

gtest = UndirectedGraph[
   Rule @@@ 
    ResourceFunction[
      "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z,
         w}}, {{1, 2}, {1, 3}}, 10, "FinalState"], Sequence[
   VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"][
     "SpatialGraph", "VertexStyle"], 
    EdgeStyle -> ResourceFunction["WolframPhysicsProjectStyleData"][
     "SpatialGraph", "EdgeLineStyle"]] ];
Geodesics[gtest, #] & /@ {{{79, 207}}, {{143, 258}}}

 

Why are geodesics important? One reason is that in Einstein’s general relativity they’re the paths that light (or objects in “free fall”) follow in space. And in that theory gravity is associated with curvature in space. So when something is deflected going around the Sun, that happens because space around the Sun is curved, so the geodesic the object follows is also curved.

General relativity’s description of curvature in space turns out to be based entirely on the Ricci scalar curvature R that we encountered above (as well as the slightly more sophisticated Ricci tensor). So if we want to find out whether our models reproduce Einstein’s equations for gravity, we basically have to find out whether the Ricci curvatures that arise from our hypergraphs are the same as the theory implies.

There’s quite a bit of mathematical sophistication involved (for example, we have to consider curvature in space+time, not just space), but the bottom line is that, yes, in various limits, and subject to various assumptions, our models do indeed reproduce Einstein’s equations. (At first, we’re just reproducing the vacuum Einstein equations, appropriate when there’s no matter involved; when we discuss matter, we’ll see that we actually get the full Einstein equations.)

It’s a big deal to reproduce Einstein’s equations. Normally in physics, Einstein’s equations are what you start from (or sometimes they arise as a consistency condition for a theory): here they’re what comes out as an emergent feature of the model.

It’s worth saying a little about how the derivation works. It’s actually somewhat analogous to the derivation of the equations of fluid flow from the limit of the underlying dynamics of lots of discrete molecules. But in this case, it’s the structure of space rather than the velocity of a fluid that we’re computing. It involves some of the same kinds of mathematical approximations and assumptions, though. One has to assume, for example, that there’s enough effective randomness generated in the system that statistical averages work. There is also a whole host of subtle mathematical limits to take. Distances have to be large compared to individual hypergraph connections, but small compared to the whole size of the hypergraph, etc.

It’s pretty common for physicists to “hack through” the mathematical niceties. That’s actually happened for nearly a century in the case of deriving fluid equations from molecular dynamics. And we’re definitely guilty of the same thing here. Which in a sense is another way of saying that there’s lots of nice mathematics to do in actually making the derivation rigorous, and understanding exactly when it’ll apply, and so on.

By the way, when it comes to mathematics, even the setup that we have is interesting. Calculus has been built to work in ordinary continuous spaces (manifolds that locally approximate Euclidean space). But what we have here is something different: in the limit of an infinitely large hypergraph, it’s like a continuous space, but ordinary calculus doesn’t work on it (not least because it isn’t necessarily integer-dimensional). So to really talk about it well, we have to invent something that’s kind of a generalization of calculus, that’s for example capable of dealing with curvature in fractional-dimensional space. (Probably the closest current mathematics to this is what’s been coming out of the very active field of geometric group theory.)

It’s worth noting, by the way, that there’s a lot of subtlety in the precise tradeoff between changing the dimension of space, and having curvature in it. And while we think our universe is three-dimensional, it’s quite possible according to our models that there are at least local deviations—and most likely there were actually large deviations in the early universe.

Time

In our models, space is defined by the large-scale structure of the hypergraph that represents our collection of abstract relations. But what then is time?

For the past century or so, it’s been pretty universally assumed in fundamental physics that time is in a sense “just like space”—and that one should for example lump space and time together and talk about the “spacetime continuum”. And certainly the theory of relativity points in this direction. But if there’s been one “wrong turn” in the history of physics in the past century, I think it’s the assumption that space and time are the same kind of thing. And in our models they’re not—even though, as we’ll see, relativity comes out just fine.

So what then is time? In effect it’s much as we experience it: the inexorable process of things happening and leading to other things. But in our models it’s something much more precise: it’s the progressive application of rules, that continually modify the abstract structure that defines the contents of the universe.

The version of time in our models is in a sense very computational. As time progresses we are in effect seeing the results of more and more steps in a computation. And indeed the phenomenon of computational irreducibility implies that there is something definite and irreducible “achieved” by this process. (And, for example, this irreducibility is what I believe is responsible for the “encrypting” of initial conditions that is associated with the law of entropy increase, and the thermodynamic arrow of time.) Needless to say, of course, our modern computational paradigm did not exist a century ago when “spacetime” was introduced, and perhaps if it had, the history of physics might have been very different.

But, OK, so in our models time is just the progressive application of rules. But there is a subtlety in exactly how this works that might at first seem like a detail, but that actually turns out to be huge, and in fact turns out to be the key to both relativity and quantum mechanics.

At the beginning of this piece, I talked about the rule

{{x, y}, {x, z}} → {{x, z}, {x, w}, {y, w}, {z, w}}


RulePlot[ResourceFunction[
   "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
     w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.55]

 

and showed the “first few steps” in applying it


ResourceFunction["WolframModelPlot"] /@ 
 ResourceFunction[
   "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
     w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 4, "StatesList"]

 

But how exactly did the rule get applied? What is “inside” these steps? The rule defines how to take two connections in the hypergraph (which in this case is actually just a graph) and transform them into four new connections, creating a new element in the process. So each “step” that we showed before actually consists of several individual “updating events” (where here newly added connections are highlighted, and ones that are about to be removed are dashed):


With[{eo = 
   ResourceFunction[
     "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
       w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 4]}, 
 TakeList[eo["EventsStatesPlotsList", ImageSize -> 130], 
  eo["GenerationEventsCountList", 
   "IncludeBoundaryEvents" -> "Initial"]]]

 

But now, here is the crucial point: this is not the only sequence of updating events consistent with the rule. The rule just says to find two adjacent connections, and if there are several possible choices, it says nothing about which one. And a crucial idea in our model is in a sense just to do all of them.

We can represent this with a graph that shows all possible paths:


CloudGet["https://wolfr.am/LmHho8Tr"]; (*newgraph*)newgraph[
 Graph[ResourceFunction["MultiwaySystem"][
   "WolframModel" -> {{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z,
         w}}}, {{{1, 2}, {2, 3}, {3, 4}, {2, 4}}}, 3, "StatesGraph", 
   VertexSize -> 3, PerformanceGoal -> "Quality"], 
  AspectRatio -> 1/2], {3, 0.7}]

 

For the very first update, there are two possibilities. Then for each of the results of these, there are four additional possibilities. But at the next update, something important happens: two of the branches merge. In other words, even though we have done a different sequence of updates, the outcome is the same.

Things rapidly get complicated. Here is the graph after one more update, now no longer trying to show a progression down the page:


Graph[ResourceFunction["MultiwaySystem"][
  "WolframModel" -> {{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, 
       w}}}, {{{1, 2}, {2, 3}, {3, 4}, {2, 4}}}, 4, "StatesGraph", 
  VertexSize -> 3, PerformanceGoal -> "Quality"]]

 

So how does this relate to time? What it says is that in the basic statement of the model there is not just one path of time; there are many paths, and many “histories”. But the model—and the rule that is used—determines all of them. And we have seen a hint of something else: that even if we might think we are following an “independent” path of history, it may actually merge with another path.

It will take some more discussion to explain how this all works. But for now let me say that what will emerge is that time is about causal relationships between things, and that in fact, even when the paths of history that are followed are different, these causal relationships can end up being the same—and that in effect, to an observer embedded in the system, there is still just a single thread of time.

The Graph of Causal Relationships

In the end it’s wonderfully elegant. But to get to the point where we can understand the elegant bigger picture we need to go through some detailed things. (It isn’t terribly surprising that a fundamental theory of physics—inevitably built on very abstract ideas—is somewhat complicated to explain, but so it goes.)

To keep things tolerably simple, I’m not going to talk directly about rules that operate on hypergraphs. Instead I’m going to talk about rules that operate on strings of characters. (To clarify: these are not the strings of string theory—although in a bizarre twist of “pun-becomes-science” I suspect that the continuum limit of the operations I discuss on character strings is actually related to string theory in the modern physics sense.)

OK, so let’s say we have the rule:

{A → BBB, BB → A}

This rule says that anywhere we see an A, we can replace it with BBB, and anywhere we see BB we can replace it with A. So now we can generate what we call the multiway system for this rule, and draw a “multiway graph” that shows everything that can happen:


ResourceFunction["MultiwaySystem"][{"A" -> "BBB", 
  "BB" -> "A"}, {"A"}, 8, "StatesGraph"]

 

At the first step, the only possibility is to use A → BBB to replace the A with BBB. But then there are two possibilities: replace either the first BB or the second BB—and these choices give different results. On the next step, though, all that can be done is to replace the A—in both cases giving BBBB.

So in other words, even though we in a sense had two paths of history that diverged in the multiway system, it took only one step for them to converge again. And if you trace through the picture above you’ll find out that’s what always happens with this rule: every pair of branches that is produced always merges, in this case after just one more step.
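(A quick check, which I am adding here rather than it being part of the original exposition: asking the MultiwaySystem function for its “AllStatesList”, the distinct states at each step, should show the branch at step 2 merging back at step 3.)

ResourceFunction["MultiwaySystem"][{"A" -> "BBB", "BB" -> "A"}, {"A"}, 3, "AllStatesList"]
(* expect {{"A"}, {"BBB"}, {"AB", "BA"}, {"BBBB"}} -- two branches that merge *)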

This kind of balance between branching and merging is a phenomenon I call “causal invariance”. And while it might seem like a detail here, it actually turns out that it’s at the core of why relativity works, why there’s a meaningful objective reality in quantum mechanics, and a host of other core features of fundamental physics.

But let’s explain why I call the property causal invariance. The picture above just shows what “state” (i.e. what string) leads to what other one. But at the risk of making the picture more complicated (and note that this is incredibly simple compared to the full hypergraph case), we can annotate the multiway graph by including the updating events that lead to each transition between states:


LayeredGraphPlot[
 ResourceFunction["MultiwaySystem"][{"A" -> "BBB", 
   "BB" -> "A"}, {"A"}, 8, "EvolutionEventsGraph"], AspectRatio -> 1]

 

But now we can ask the question: what are the causal relationships between these events? In other words, what event needs to happen before some other event can happen? Or, said another way, what events must have happened in order to create the input that’s needed for some other event?

Let us go even further, and annotate the graph above by showing all the causal dependencies between events:


LayeredGraphPlot[
 ResourceFunction["MultiwaySystem"][{"A" -> "BBB", 
   "BB" -> "A"}, {"A"}, 7, "EvolutionCausalGraph"], AspectRatio -> 1]

 

The orange lines in effect show which event has to happen before which other event—or what all the causal relationships in the multiway system are. And, yes, it’s complicated. But note that this picture shows the whole multiway system—with all possible paths of history—as well as the whole network of causal relationships within and between these paths.

But here’s the crucial thing about causal invariance: it implies that actually the graph of causal relationships is the same regardless of which path of history is followed. And that’s why I originally called this property “causal invariance”—because it says that with a rule like this, the causal properties are invariant with respect to different choices of the sequence in which updating is done.

And if one traced through the picture above (and went quite a few more steps), one would find that for every path of history, the causal graph representing causal relationships between events would always be:


ResourceFunction["SubstitutionSystemCausalGraph"][{"A" -> "BBB", 
   "BB" -> "A"}, "A", 10] // LayeredGraphPlot

 

or, drawn differently,


ResourceFunction["SubstitutionSystemCausalGraph"][{"A" -> "BBB", 
  "BB" -> "A"}, "A", 12]

 

The Importance of Causal Invariance

To understand more about causal invariance, it’s useful to look at an even simpler example: the case of the rule BA → AB. This rule says that any time there’s a B followed by an A in a string, swap these characters around. In other words, this is a rule that tries to sort a string into alphabetical order, two characters at a time.

Let’s say we start with BBBAAA. Then here’s the multiway graph that shows all the things that can happen according to the rule:


Graph[ResourceFunction["MultiwaySystem"][{"BA" -> "AB"}, "BBBAAA", 12,
    "EvolutionEventsGraph"], AspectRatio -> 1.5] // LayeredGraphPlot

 

There are lots of different paths that can be followed, depending on which BA in the string the rule is applied to at each step. But the important thing we see is that at the end all the paths merge, and we get a single final result: the sorted string AAABBB. And the fact that we get this single final result is a consequence of the causal invariance of the rule. In a case like this where there’s a final result (as opposed to just evolving forever), causal invariance basically says: it doesn’t matter what order you do all the updates in; the result you’ll get will always be the same.

I’ve introduced causal invariance in the context of trying to find a model of fundamental physics—and I’ve said that it’s going to be critical to both relativity and quantum mechanics. But actually what amounts to causal invariance has been seen before in various different guises in mathematics, mathematical logic and computer science. (Its most common name is “confluence”, though there are some technical differences between this and what I call causal invariance.)

Think about expanding out an algebraic expression, like (x + (1 + x)²)(x + 2)². You could expand one of the powers first, then multiply things out. Or you could multiply the terms first. It doesn’t matter what order you do the steps in; you’ll always get the same canonical form (which in this case Mathematica tells me is 4 + 16x + 17x² + 7x³ + x⁴). And this independence of orders is essentially causal invariance.
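(And indeed, as a one-line check added here, Expand produces that canonical form directly, whatever order it uses internally:)

Expand[(x + (1 + x)^2) (x + 2)^2]
(* 4 + 16 x + 17 x^2 + 7 x^3 + x^4 *)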

 

Here’s one more example. Imagine you’ve got some recursive definition, say f[n_] := f[n - 1] + f[n - 2] (with f[0] = f[1] = 1). Now evaluate f[10]. First you get f[9] + f[8]. But what do you do next? Do you evaluate f[9], or f[8]? And then what? In the end, it doesn’t matter; you’ll always get 89. And this is another example of causal invariance.
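(As actual code, the check I’d add is just:)

f[n_] := f[n - 1] + f[n - 2]; f[0] = f[1] = 1;
f[10]
(* 89 *)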

 

When one thinks about parallel or asynchronous algorithms, it matters a great deal whether one has causal invariance. Because if one does, it means one can do things in any order—say, depth-first, breadth-first, or whatever—and one will always get the same answer. And that’s what’s happening in our little sorting algorithm above.
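Here is a minimal sketch of that idea (my own construction, with a hypothetical helper sortOnce, not code from the project): apply BA → AB at one randomly chosen position at a time, and note that every random update order reaches the same fixed point:

(* hypothetical helper: apply BA -> AB at one randomly chosen matching position, if any *)
sortOnce[s_String] := 
  Module[{pos = StringPosition[s, "BA"]}, 
   If[pos === {}, s, StringReplacePart[s, "AB", RandomChoice[pos]]]];

(* five independent random update orders all reach the same sorted string *)
Table[FixedPoint[sortOnce, "BBABAB"], 5]
(* {"AABBBB", "AABBBB", "AABBBB", "AABBBB", "AABBBB"} *)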

OK, but now let’s come back to causal relationships. Here’s the multiway system for the sorting process annotated with all causal relationships for all paths:


Magnify[LayeredGraphPlot[
  ResourceFunction["MultiwaySystem"][{"BA" -> "AB"}, "BBBAAA", 12, 
   "EvolutionCausalGraph"], AspectRatio -> 1.5], .6]

 

And, yes, it’s a mess. But because there’s causal invariance, we know something very important: this is basically just a lot of copies of the same causal graph—a simple grid:


centeredRange[n_] := # - Mean@# &@Range@n;
centeredLayer[n_] := {#, n} & /@ centeredRange@n;
diamondLayerSizes[layers_?OddQ] := 
  Join[#, Reverse@Most@#] &@Range[(layers + 1)/2];
diamondCoordinates[layers_?OddQ] := 
  Catenate@MapIndexed[
    Thread@{centeredRange@#, (layers - First@#2)/2} &, 
    diamondLayerSizes[layers]];
diamondGraphLayersCount[graph_] := 2 Sqrt[VertexCount@graph] - 1;
With[{graph = 
   ResourceFunction["SubstitutionSystemCausalGraph"][{"BA" -> "AB"}, 
    "BBBBAAAA", 12]}, 
 Graph[graph, 
  VertexCoordinates -> 
   diamondCoordinates@diamondGraphLayersCount@graph, VertexSize -> .2]]

 

(By the way—as the picture suggests—the cross-connections between these copies aren’t trivial, and later on we’ll see they’re associated with deep relations between relativity and quantum mechanics, that probably manifest themselves in the physics of black holes. But we’ll get to that later…)

OK, so every different way of applying the sorting rule is supposed to give the same causal graph. So here’s one example of how we might apply the rule starting with a particular initial string:


evo = (SeedRandom[2424]; 
   ResourceFunction[
     "SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, 
    "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}]);
ResourceFunction["SubstitutionSystemCausalPlot"][evo, 
 EventLabels -> False, CellLabels -> True, CausalGraph -> False]

 

But now let’s show the graph of causal connections. And we see it’s just a grid:


evo = (SeedRandom[2424]; 
   ResourceFunction[
     "SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, 
    "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}]);
ResourceFunction["SubstitutionSystemCausalPlot"][evo, 
 EventLabels -> False, CellLabels -> False, CausalGraph -> True]

 

Here are three other possible sequences of updates:


SeedRandom[242444]; GraphicsRow[
 Table[ResourceFunction["SubstitutionSystemCausalPlot"][
   ResourceFunction[
     "SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, 
    "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}], EventLabels -> False, 
   CellLabels -> False, CausalGraph -> True], 3], ImageSize -> Full]

 

But now we see causal invariance in action: even though different updates occur at different times, the graph of causal relationships between updating events is always the same. And having seen this—in the context of a very simple example—we’re ready to talk about special relativity.

Deriving Special Relativity

It’s the typical first instinct in thinking about doing science: you imagine doing an experiment on a system, but you—as the “observer”—are outside the system. Of course if you’re thinking about modeling the whole universe and everything in it, this isn’t ultimately a reasonable way to think about things. Because the “observer” is inevitably part of the universe, and so has to be modeled just like everything else.

In our models what this means is that the “mind of the observer”, just like everything else in the universe, has to get updated through a series of updating events. There’s no absolute way for the observer to “know what’s going on in the universe”; all they ever experience is a series of updating events, that may happen to be affected by updating events occurring elsewhere in the universe. Or, said differently, all the observer can ever observe is the network of causal relationships between events—or the causal graph that we’ve been talking about.

So as a toy model, let’s look at our BA → AB rule for strings. We might imagine that the string is laid out in space. But to our observer the only thing they know is the causal graph that represents causal relationships between events. And for the BA → AB system here’s one way we can draw that:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[10, {0, 0}, {0.0, 0.0}, lorentz[0]]

 

But now let’s think about how observers might “experience” this causal graph. Underneath, an observer is getting updated by some sequence of updating events. But even though that’s “really what’s going on”, to make sense of it, we can imagine our observers setting up internal “mental” models for what they see. And a pretty natural thing for observers like us to do is just to say “one set of things happens all across the universe, then another, and so on”. And we can translate this into saying that we imagine a series of “moments” in time, where things happen “simultaneously” across the universe—at least with some convention for defining what we mean by simultaneously. (And, yes, this part of what we’re doing is basically following what Einstein did when he originally proposed special relativity.)

Here’s a possible way of doing it:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[10, {1, 0}, {0.0, 0.0}, lorentz[0]]

 

One can describe this as a “foliation” of the causal graph. We’re dividing the causal graph into leaves or slices. And each slice our observers can consider to be a “successive moment in time”.

It’s important to note that there are some constraints on the foliation we can pick. The causal graph defines what event has to happen before what. And if our observers are going to have a chance of making sense of the world, it had better be the case that their notion of the progress of time aligns with what the causal graph says. So for example this foliation wouldn’t work—because basically it says that the time we assign to events is going to disagree with the order in which the causal graph says they have to happen:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[6, {.2, 0}, {5, 0.0}, lorentz[0]]

 

But, so given the foliation above, what actual order of updating events does it imply? It basically just says: as many events as possible happen at the same time (i.e. in the same slice of the foliation), as in this picture:


(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-08/BoostedEvolution.wl*)
CloudGet["https://wolfr.am/LbaDFVSn"]; (*boostedEvolution*) \
ResourceFunction["SubstitutionSystemCausalPlot"][
  boostedEvolution[
   ResourceFunction[
     "SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, 
    StringRepeat["BA", 10], 10], 0], EventLabels -> False, 
  CellLabels -> True, CausalGraph -> False]

 

OK, now let’s connect this to physics. The foliation we had above is relevant to observers who are somehow “stationary with respect to the universe” (the “cosmological rest frame”). One can imagine that as time progresses, the events a particular observer experiences are ones in a column going vertically down the page:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[5, {1, 0.01}, {0.0, 0.0}, {1.5, 0}, {Red, 
  Directive[Dotted, Thick, Red]}, lorentz[0]]

 

But now let’s think about an observer who is uniformly moving in space. They’ll experience a different sequence of events, say:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[5, {1, 0.01}, {0.0, 0.3}, {0.6, 0}, {Red, 
  Directive[Dotted, Thick, Red]}, lorentz[0]]

 

And that means that the foliation they’ll naturally construct will be different. From the “outside” we can draw it on the causal graph like this:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[10, {1, 0.01}, {0.3, 0.3}, {0, 0}, {Red, 
  Directive[Dotted, Thick, Red]}, lorentz[0.]]

 

But to the observer each slice just represents a successive moment of time. And they don’t have any way to know how the causal graph was drawn. So they’ll construct their own version, where the slices are horizontal:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[10, {1, 0.01}, {0.3, 0.3}, {0, 0}, {Red, 
  Directive[Dotted, Thick, Red]}, lorentz[0.3]]

 

But now there’s a purely geometrical fact: to make this rearrangement, while preserving the basic structure (and here, angles) of the causal graph, each moment of time has to sample fewer events in the causal graph, by a factor of √(1 − β²), where β is the angle that represents the velocity of the observer.

If you know about special relativity, you’ll recognize a lot of this. What we’ve been calling foliations correspond directly to relativity’s “reference frames”. And our foliations that represent motion are the standard inertial reference frames of special relativity.

But here’s the special thing that’s going on here: we can interpret all this discussion of foliations and reference frames in terms of the actual rules and evolution of our underlying system. So here now is the evolution of our string-sorting system in the “boosted reference frame” corresponding to an observer going at a certain speed:


(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-08/BoostedEvolution.wl*)
CloudGet["https://wolfr.am/LbaDFVSn"]; (*boostedEvolution*) \
ResourceFunction["SubstitutionSystemCausalPlot"][
  boostedEvolution[
   ResourceFunction[
     "SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, 
    StringRepeat["BA", 10], 10], 0.3], EventLabels -> False, 
  CellLabels -> True, CausalGraph -> False]

 

And here’s the crucial point: because of causal invariance it doesn’t matter that we’re in a different reference frame—the causal graph for the system (and the way it eventually sorts the string) is exactly the same.

In special relativity, the key idea is that the “laws of physics” work the same in all inertial reference frames. But why should that be true? Well, in our systems, there’s an answer: it’s a consequence of causal invariance in the underlying rules. In other words, from the property of causal invariance, we’re able to derive relativity.

Normally in physics one puts in relativity by the way one sets up the mathematical structure of spacetime. But in our models we don’t start from anything like this, and in fact space and time are not even at all the same kind of thing. But what we can now see is that—because of causal invariance—relativity emerges in our models, with all the relationships between space and time that that implies.

So, for example, if we look at the picture of our string-sorting system above, we can see relativistic time dilation. In effect, because of the foliation we picked, time operates slower. Or, said another way, in the effort to sample space faster, our observer experiences slower updating of the system in time.

The speed of light c in our toy system is defined by the maximum rate at which information can propagate—which is determined by the rule, and for this rule is one character per step. And in terms of this, we can then say that our foliation corresponds to a speed 0.3 c. But now we can look at the amount of time dilation, and it’s exactly the amount 1/√(1 − v²/c²) that relativity says it should be.
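(Numerically, as a quick check of my own, the dilation factor at 0.3 c is:)

With[{beta = 0.3}, 1/Sqrt[1 - beta^2]]
(* 1.04828 *)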

By the way, if we imagine trying to make our observer go “faster than light”, we can see that can’t work. Because there’s no way to tip the foliation at more than 45° in our picture, and still maintain the causal relationships implied by the causal graph.

OK, so in our toy model we can derive special relativity. But here’s the thing: this derivation isn’t specific to the toy model; it applies to any rule that has causal invariance. So even though we may be dealing with hypergraphs, not strings, and we may have a rule that shows all kinds of complicated behavior, if it ultimately has causal invariance, then (with various technical caveats, mostly about possible wildness in the causal graph) it will exhibit relativistic invariance, and a physics based on it will follow special relativity.

What Is Energy? What Is Mass?

In our model, everything in the universe—space, matter, whatever—is supposed to be represented by features of our evolving hypergraph. So within that hypergraph, is there a way to identify things that are familiar from current physics, like mass, or energy?

I have to say that although it’s a widespread concept in current physics, I’d never thought of energy as something fundamental. I’d just thought of it as an attribute that things (atoms, photons, whatever) can have. I never really thought of it as something that one could identify abstractly in the very structure of the universe.

So it came as a big surprise when we recently realized that actually in our model, there is something we can point to, and say “that’s energy!”, independent of what it’s the energy of. The technical statement is: energy corresponds to the flux of causal edges through spacelike hypersurfaces. And, by the way, momentum corresponds to the flux of causal edges through timelike hypersurfaces.

OK, so what does this mean? First, what’s a spacelike hypersurface? It’s actually a standard concept in general relativity, for which there’s a direct analogy in our models. Basically it’s what forms a slice in our foliation. Why is it called what it’s called? We can identify two kinds of directions: spacelike and timelike.

A spacelike direction is one that involves just moving in space—and it’s a direction where one can always reverse and go back. A timelike direction is one that involves also progressing through time—where one can’t go back. We can mark spacelike (solid red) and timelike (dashed red) hypersurfaces in the causal graph for our toy model:


CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)

CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)
\
regularCausalGraphPlot[10, {1, 0.5}, {0., 0.}, {-0.5, 0}, {Red, 
  Directive[Dashed, Red]}, lorentz[0.]]

 

(They might be called “surfaces”, except that “surfaces” are usually thought of as 2-dimensional, while in our 3-space + 1-time dimensional universe these foliation slices are 3-dimensional: hence the term “hypersurfaces”.)

OK, now let’s look at the picture. The “causal edges” are the causal connections between events, shown in the picture as lines joining the events. So when we talk about a “flux of causal edges through spacelike hypersurfaces”, what we’re talking about is the net number of causal edges that go down through the horizontal slices in the pictures.

In the toy model that’s trivial to see. But here’s a causal graph from a simple hypergraph model, where it’s already considerably more complicated:


Graph[ResourceFunction[
   "WolframModel"][  {{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, 
     z}}, {{0, 0}, {0, 0}}, 15, "LayeredCausalGraph"], 
 AspectRatio -> 1/2]

 

(Our toy-model causal graph starts from a line of events because we set up a long string as the initial condition; this starts from a single event because it’s starting from a minimal initial condition.)

But when we put a foliation on this causal graph (thereby effectively defining our reference frame) we can start counting how many causal edges go down through successive (“spacelike”) slices:


foliationLines[{lineDensityHorizontal_ : 1, 
    lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, 
    tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, 
   lineStyles : {_, _} : {Red, Red}, 
   transform_ : (# &)] := {If[lineDensityHorizontal != 0, 
    Style[Table[
      Line[transform /@ {{-100 + First@offset, 
          k - 100 tanHorizontal + Last@offset}, {100 + First@offset, 
          k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityHorizontal}], First@lineStyles], {}], 
   If[lineDensityVertical != 0, 
    Style[Table[
      Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + 
           Last@offset}, {k + 100 tanVertical + First@offset, 
          100 + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityVertical}], Last@lineStyles], {}]};
ResourceFunction[
   "WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, {{0, 
    0}, {0, 0}}, 15]["LayeredCausalGraph", AspectRatio -> 1/2, 
 Epilog -> 
  foliationLines[{0.44, 0}, {0, 
    0}, {0, -0.5}, {Directive[Red, Opacity[0.2]], Red}]]

 

We can also ask how many causal edges go “sideways”, through timelike hypersurfaces:


foliationLines[{lineDensityHorizontal_ : 1, 
    lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, 
    tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, 
   lineStyles : {_, _} : {Red, Red}, 
   transform_ : (# &)] := {If[lineDensityHorizontal != 0, 
    Style[Table[
      Line[transform /@ {{-100 + First@offset, 
          k - 100 tanHorizontal + Last@offset}, {100 + First@offset, 
          k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityHorizontal}], First@lineStyles], {}], 
   If[lineDensityVertical != 0, 
    Style[Table[
      Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + 
           Last@offset}, {k + 100 tanVertical + First@offset, 
          100 + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityVertical}], Last@lineStyles], {}]};
ResourceFunction[
   "WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, {{0, 
    0}, {0, 0}}, 15]["LayeredCausalGraph", AspectRatio -> 1/2, 
 Epilog -> 
  foliationLines[{0, 1/3}, {0, 0}, {2.1, 
    0}, {Directive[Red, Opacity[0.5]], 
    Directive[Dotted, Opacity[0.7], Red]}]]
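As a rough quantitative sketch (my own construction, not code from the project), one can actually count these fluxes. Take an event’s “layer” to be its graph distance from the first event, one simple choice of foliation, and count the causal edges crossing each successive slice:

(* my sketch: count causal edges crossing successive "horizontal" slices,
   with an event's layer taken as its distance from the first event *)
cg = ResourceFunction["WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, 
   {{0, 0}, {0, 0}}, 15, "CausalGraph"];
layer = AssociationThread[VertexList[cg], 
   GraphDistance[cg, First[VertexList[cg]], #] & /@ VertexList[cg]];
flux[t_] := Count[EdgeList[cg], DirectedEdge[a_, b_] /; layer[a] <= t < layer[b]];
Table[flux[t], {t, 0, Max[Select[Values[layer], IntegerQ]] - 1}]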

 

OK, so why do we think these fluxes of edges correspond to energy and momentum? Imagine what happens if we change our foliation, say tipping it to correspond to motion at some velocity, as we did in the previous section. It takes a little bit of math, but what we find out is that our fluxes of causal edges transform with velocity basically just like we saw distance and time transform in the previous section.

In the standard derivation of relativistic mechanics, there’s a consistency argument that energy has to transform with velocity like time does, and momentum like distance. But now we actually have a structural reason for this to be the case. It’s a fundamental consequence of our whole setup, and of causal invariance. In traditional physics, one often says that position is the conjugate variable to momentum, and energy to time. And that’s something that’s burnt into the mathematical structure of the theory. But here it’s not something we’re burning in; it’s something we’re deriving from the underlying structure of our model.

And that means there’s ultimately a lot more we can say about it. For example, we might wonder what the “zero of energy” is. After all, if we look at one of our causal graphs, a lot of the causal edges are really just going into “maintaining the structure of space”. So if in a sense space is uniform, there’s inevitably a uniform “background flux” of causal edges associated with that. And whatever we consider to be “energy” corresponds to the fluctuations of that flux around its background value.

By the way, it’s worth mentioning what a “flux of causal edges” corresponds to. Each causal edge represents a causal connection between events, that is in a sense “carried” by some element in the underlying hypergraph (the “spatial hypergraph”). So a “flux of causal edges” is in effect the communication of activity (i.e. events), either in time (i.e. through spacelike hypersurfaces) or in space (i.e. through timelike hypersurfaces). And at least in some approximation we can then say that energy is associated with activity in the hypergraph that propagates information through time, while momentum is associated with activity that propagates information in space.

There’s a fundamental feature of our causal graphs that we haven’t mentioned yet—that’s related to information propagation. Start at any point (i.e. any event) in a causal graph. Then trace the causal connections from that event. You’ll get some kind of cone (here just in 2D):


CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*)

foliationLines[{lineDensityHorizontal_ : 1, 
    lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, 
    tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, 
   lineStyles : {_, _} : {Red, Red}, 
   transform_ : (# &)] := {If[lineDensityHorizontal != 0, 
    Style[Table[
      Line[transform /@ {{-100 + First@offset, 
          k - 100 tanHorizontal + Last@offset}, {100 + First@offset, 
          k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityHorizontal}], First@lineStyles], {}], 
   If[lineDensityVertical != 0, 
    Style[Table[
      Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + 
           Last@offset}, {k + 100 tanVertical + First@offset, 
          100 + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityVertical}], Last@lineStyles], {}]};
squareCausalGraphPlot[
   layerCount_ : 9, {lineDensityHorizontal_ : 1, 
    lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, 
    tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, 
   lineStyles : {_, _} : {Red, Red}, transform_ : (# &)] := 
  NeighborhoodGraph[
   DirectedGraph[
    Flatten[Table[{v[{i + 1, j}] -> v[{i, j}], 
       v[{i + 1, j + 1}] -> v[{i, j}]}, {i, layerCount - 1}, {j, 
       1 + Round[-layerCount/2 + i/2], (layerCount + i)/2}]], 
    VertexCoordinates -> 
     Catenate[
      Table[v[{i, j}] -> 
        transform[{2 (#2 - #1/2), #1} & @@ {i, j}], {i, 
        layerCount + 1}, {j, 
        1 + Round[-layerCount/2 + i/2] - 1, (layerCount + i)/2 + 1}]],
     VertexSize -> .33, 
    VertexStyle -> 
     Directive[Directive[Opacity[.7], Hue[0.14, 0.34, 1.]], 
      EdgeForm[Directive[Opacity[0.4], Hue[0.09, 1., 0.91]]]], 
    VertexShapeFunction -> "Rectangle", 
    Epilog -> 
     foliationLines[{lineDensityHorizontal, 
       lineDensityVertical}, {tanHorizontal, tanVertical}, offset, 
      lineStyles, transform]], v[{1, 1}], 9];
With[{graph = 
   squareCausalGraphPlot[
    10, {0, 0}, {0., 0.}, {-0.5, 0}, {Red, Directive[Dotted, Red]}, 
    lorentz[0.]]}, 
 Graph[graph, 
  VertexStyle -> {Directive[
     Directive[Opacity[.7], Hue[0.14, 0.34, 1.]], 
     EdgeForm[Directive[Opacity[0.4], Hue[0.09, 1., 0.91]]]], 
    Alternatives @@ VertexOutComponent[graph, v[{9, 5}]] -> 
     Directive[Directive[Opacity[.6], Hue[0, 0.45, 0.87]], EdgeForm[
Hue[0, 1, 0.48]]]}]]

 

The cone is more complicated in a more complicated causal graph. But you’ll always have something like it. And what it corresponds to physically is what’s normally called a light cone (or “forward light cone”). Assuming we’ve drawn our causal network so that events are somehow laid out in space across the page, then the light cone will show how information (as transmitted by light) can spread in space with time.

When the causal graph gets complicated, the whole setup with light cones gets complicated, as we’ll discuss for example in connection with black holes later. But for now, we can just say there are cones in our causal graph, and in effect the angle of these cones represents the maximum rate of information propagation in the system, which we can identify with the physical speed of light.

And in fact, not only can we identify light cones in our causal graph: in some sense we can think of our whole causal graph as just being a large number of “elementary light cones” all knitted together. And, as we mentioned, much of the structure that’s built necessarily goes into, in effect, “maintaining the structure of space”.

But let’s look more closely at our light cones. There are causal edges on their boundaries that in effect correspond to propagation at the speed of light—and that, in terms of the underlying hypergraph, correspond to events that “reach out” in the hypergraph, and “entrain” new elements as quickly as possible. But what about causal edges that are “more vertical”? These causal edges are associated with events that in a sense reuse elements in the hypergraph, without involving new ones.

And it looks like these causal edges have an important interpretation: they are associated with mass (or, more specifically, rest mass). OK, so the total flux of causal edges through spacelike hypersurfaces corresponds to energy. And now we’re saying that the flux of causal edges specifically in the timelike direction corresponds to rest mass. We can see what happens if we “tip our reference frames” just a bit, say corresponding to a velocity v ≪ c. Again, there’s a small amount of math, but it’s pretty easy to derive formulas for momentum (p) and energy (E). The speed of light c comes into the formulas because it defines the ratio of “horizontal” (i.e. spacelike) to “vertical” (i.e. timelike) distances on the causal graph. And for v small compared to c we get:

E ≃ m c² + ½ m v²   p ≃ m v
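(One can verify those low-velocity expansions symbolically; this check is my addition:)

Series[m c^2/Sqrt[1 - v^2/c^2], {v, 0, 2}]  (* m c^2 + (m v^2)/2 + ... *)
Series[m v/Sqrt[1 - v^2/c^2], {v, 0, 2}]    (* m v + ... *)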
So from these formulas we can see that just by thinking about causal graphs (and, yes, with a backdrop of causal invariance, and a whole host of detailed mathematical limit questions that we’re not discussing here), we’ve managed to derive a basic (and famous) fact about the relation between energy and mass:

E = m c²
Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.

General Relativity & Gravity

Earlier on, we talked about how curvature of space can arise in our models. But at that point we were just talking about “empty space”. Now we can go back and also talk about how curvature interacts with mass and energy in space.

In our earlier discussion, we talked about constructing spherical balls by starting at some point in the hypergraph, and then following all possible sequences of r connections. But now we can do something directly analogous in the causal graph: start at some point, and follow possible sequences of t connections. There’s quite a bit of mathematical trickiness, but essentially this gets us “volumes of light cones”.
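Here is a minimal sketch of that measurement (my own construction): take the causal graph of the simple rule used above, and count the events reachable within t causal steps of the first event, a discrete “light cone volume”:

(* my sketch: discrete light cone volumes -- events within t causal steps of the first event *)
cg = ResourceFunction["WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, 
   {{0, 0}, {0, 0}}, 15, "CausalGraph"];
Table[Length[VertexOutComponent[cg, {First[VertexList[cg]]}, t]], {t, 0, 8}]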

If space is effectively d-dimensional, then to a first approximation this volume will grow like t^(d+1). But like in the spatial case, there’s a correction term, this time proportional to the so-called Ricci tensor Rμν. (The actual expression is roughly Rμν t^μ t^ν, where the t^μ are timelike vectors, etc.)

OK, but we also know something else about what is supposed to be inside our light cones: not only are there “background connections” that maintain the structure of space, there are also “additional” causal edges that are associated with energy, momentum and mass. And in the limit of a large causal graph, we can identify the density of these with the so-called energy-momentum tensor Tμν. So in the end we have two contributions to the “volumes” of our light cones: one from “pure curvature” and one from energy-momentum.

Again, there’s some math involved. But the main thing is to think about the limit when we’re looking at a very large causal graph. What needs to be true for us to have d-dimensional space, as opposed to something much wilder? This puts a constraint on the growth rates of our light cone volumes, and when one works everything out, it implies that the following equation must hold:

Rμν − ½ R gμν = (8πG/c⁴) Tμν
But this is exactly Einstein’s equation for the curvature of space when matter with a certain energy-momentum is present. We’re glossing over lots of details here. But it’s still, in my view, quite spectacular: from the basic structure of our very simple models, we’re able to derive a fundamental result in physics: the equation that for more than a hundred years has passed every test in describing the operation of gravity.

There’s a footnote here. The equation we’ve just given is without a so-called cosmological term. And how that works is bound up with the question of what the zero of energy is, which in our model relates to what features of the evolving hypergraph just have to do with the “maintenance of space”, and what have to do with “things in space” (like matter).

In existing physics, there’s an expectation that even in the “vacuum” there’s actually a formally infinite density of pairs of virtual particles associated with quantum mechanics. Essentially what’s happening is that there are always pairs of particles and antiparticles being created, that annihilate quickly, but that in aggregate contribute a huge effective energy density. We’ll discuss how this relates to quantum mechanics in our models later. But for now let’s just recall that particles (like electrons) in our models basically correspond to locally stable structures in the hypergraph.

But when we think about how “space is maintained” it’s basically through all sorts of seemingly random updating events in the hypergraph. But in existing physics (or, specifically, quantum field theory) we’re basically expected to analyze everything in terms of (virtual) particles. So if we try to do that with all these random updating events, it’s not surprising that we end up saying that there are these infinite collections of things going on. (Yes, this can be made much more precise; I’m just giving an outline here.)

But as soon as we say this, there is an immediate problem: we’re saying that there’s a formally infinite—or at least huge—energy density that must exist everywhere in the universe. But if we then apply Einstein’s equation, we’ll conclude that this must produce enough curvature to basically curl the universe up into a tiny ball.

One way to get out of this is to introduce a so-called cosmological term, that’s just an extra term in the Einstein equations, and then posit that this term is sized so as to exactly cancel (yes, to perhaps one part in 10⁶⁰ or more) the energy density from virtual particles. It’s certainly not a pretty solution.

But in our models, the situation is quite different. It’s not that we have virtual particles “in space”, that are having an effect on space. It’s that the same stuff that corresponds to the virtual particles is actually “making the space”, and maintaining its structure. Of course, there are lots of details about this—which no doubt depend on the particular underlying rule. But the point is that there’s no longer a huge mystery about why “vacuum energy” doesn’t basically destroy our universe: in effect, it’s because it’s what’s making our universe.

Black Holes, Singularities, etc.

One of the big predictions of general relativity is the existence of black holes. So how do things like that work in our models? Actually, it’s rather straightforward. The defining feature of a black hole is the existence of an event horizon: a boundary that light signals can’t cross, and where in effect causal connection is broken.

In our models, we can explicitly see that happen in the causal graph. Here’s an example:


ResourceFunction[
   "WolframModel"][{{0, 1}, {0, 2}, {0, 3}} -> {{1, 2}, {3, 2}, {3, 
     4}, {4, 3}, {4, 4}}, {{0, 0}, {0, 0}, {0, 0}}, 20, 
  "CausalGraph"] // LayeredGraphPlot

 

At the beginning, everything is causally connected. But at some point the causal graph splits—and there’s an event horizon. Events happening on one side can’t influence ones on the other, and so on. And that’s how a region of the universe can “causally break off” to form something like a black hole.

But actually, in our models, the “breaking off” can be even more extreme. Not only can the causal graph split; the spatial hypergraph can actually throw off disconnected pieces—each of which in effect forms a whole “separate universe”:


Framed[ResourceFunction["WolframModelPlot"][#, 
    ImageSize -> {UpTo[100], UpTo[60]}], FrameStyle -> LightGray] & /@ 
 ResourceFunction[
   "WolframModel"][{{1, 2, 3}, {4, 5, 3}} -> {{2, 6, 4}, {6, 1, 
     2}, {4, 2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]

 

By the way, it’s interesting to look at what happens to the foliations observers make when there’s an event horizon. Causal invariance says that paths in the causal graph that diverge should always eventually merge. But if the paths go into different disconnected pieces of the causal graph, that can’t ever happen. So how does an observer deal with that? Well, basically they have to “freeze time”. They have to have a foliation where successive time slices just pile up, and never enter the disconnected pieces.

It’s just like what happens in general relativity. To an observer far from the black hole, it’ll seem to take an infinite time for anything to fall into the black hole. For now, this is just a phenomenon associated with the structure of space. But later we’ll see that it’s also the direct analog of something completely different: the process of measurement in quantum mechanics.

Coming back to gravity: we can ask questions not only about event horizons, but also about actual singularities in spacetime. In our models, these are places where lots of paths in a causal graph converge to a single point. And in our models, we can immediately study questions like whether there’s always an event horizon associated with any singularity (the “cosmic censorship hypothesis”).

We can ask about other strange phenomena from general relativity. For example, there are closed timelike curves, sometimes viewed as allowing time travel. In our models, closed timelike curves are inconsistent with causal invariance. But we can certainly invent rules that produce them. Here’s an example:


Graph[ResourceFunction["MultiwaySystem"][{"AB" -> "BAB", 
   "BA" -> "A"}, "ABA", 4, "StatesGraph"], 
 GraphLayout -> {"LayeredDigraphEmbedding", "RootVertex" -> "ABA"}]

 

We start from one “initial” state in this multiway system. But as we go forward we can enter a loop where we repeatedly visit the same state. And this loop also occurs in the causal graph. We think we’re “going forward in time”. But actually we’re just in a loop, repeatedly returning to the same state. And if we tried to make a foliation where we could describe time as always advancing, we just wouldn’t be able to do it.

Cosmology

In our model, the universe can start as a tiny hypergraph—perhaps a single self-loop. But then—as the rule gets applied—it progressively expands. With some particularly simple rules, the total size of the hypergraph has to just uniformly increase; with others it can fluctuate.

But even if the size of the hypergraph is always increasing, that doesn’t mean we’d necessarily notice. It could be that essentially everything we can see just expands too—so in effect the granularity of space is just getting finer and finer. This would be an interesting resolution to the age-old debate about whether the universe is discrete or continuous. Yes, it’s structurally discrete, but the scale of discreteness relative to our scale is always getting smaller and smaller. And if this happens fast enough, we’d never be able to “see the discreteness”—because every time we tried to measure it, the universe would effectively have subdivided before we got the result. (Somehow it’d be like the ultimate calculus epsilon-delta proof: you challenge the universe with an epsilon, and before you can get the result, the universe has made a smaller delta.)

There are some other strange possibilities too. Like that the whole hypergraph for the universe is always expanding, but pieces are continually “breaking off”, effectively forming black holes of different sizes, and allowing the “main component” of the universe to vary in size.

But regardless of how this kind of expansion works in our universe today, it’s clear that if the universe started with a single self-loop, it had to do a lot of expanding, at least early on. And here there’s an interesting possibility that’s relevant for understanding cosmology.

Just because our current universe exhibits three-dimensional space, in our models there’s no reason to think that the early universe necessarily also did. There are very different things that can happen in our models:


ResourceFunction["WolframModel"][#1, #2, #3, 
   "FinalStatePlot"] & @@@ {{{{1, 2, 3}, {4, 5, 6}, {2, 6}} -> {{7, 
      7, 2}, {6, 2, 8}, {8, 5, 7}, {8, 9, 3}, {1, 6}, {10, 6}, {5, 
      3}, {7, 11}}, {{0, 0, 0}, {0, 0, 0}, {0, 0}}, 
   16}, {{{1, 2, 3}, {1, 4, 5}, {3, 6}} -> {{7, 8, 7}, {7, 5, 6}, {9, 
      5, 5}, {1, 7, 4}, {7, 5}, {5, 10}, {11, 6}, {6, 9}}, {{0, 0, 
     0}, {0, 0, 0}, {0, 0}}, 
   100}, {{{1, 2, 3}, {3, 4}} -> {{5, 5, 5}, {5, 6, 4}, {3, 1}, {1, 
      5}}, {{0, 0, 0}, {0, 0}}, 16}}

 

In the first example here, different parts of space effectively separate into non-communicating “black hole” tree branches. In the second example, we have something like ordinary—in this case 2-dimensional—space. But in the third example, space is in a sense very connected. If we work out the volume of a spherical ball, it won’t grow like r^d; it’ll grow exponentially with r (e.g. like 2^r).
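One can measure this directly. Here is a rough sketch (my own construction, not code from the project): turn the final hypergraph of the third rule into an ordinary graph, then count the vertices within graph distance r of some starting vertex; growth much faster than any polynomial in r signals effectively infinite dimension:

(* my sketch: ball "volumes" -- vertices within graph distance r of a starting vertex *)
g = UndirectedGraph[
   Rule @@@ Catenate[Partition[#, 2, 1] & /@ 
      ResourceFunction["WolframModel"][{{1, 2, 3}, {3, 4}} -> {{5, 5, 5}, {5, 6, 4}, {3, 1}, {1, 5}}, 
       {{0, 0, 0}, {0, 0}}, 16, "FinalState"]]];
Table[Length[VertexList[NeighborhoodGraph[g, First[VertexList[g]], r]]], {r, 0, 6}]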

If we look at the causal graph, we’ll see that you can effectively “go everywhere in space”, or affect every event, very quickly. It’d be as if the speed of light is infinite. But really it’s because space is effectively infinite dimensional.

In typical cosmology, it’s been quite mysterious how different parts of the early universe managed to “communicate” with each other, for example, to smooth out perturbations. But if the universe starts effectively infinite-dimensional, and only later “relaxes” to being finite-dimensional, that’s no longer a mystery.

So, OK, what might we see in the universe today that would reflect what happened extremely early in its history? The fact that our models deterministically generate behavior that seems for all practical purposes random means that we can expect that most features of the initial conditions or very early stages of the universe will quickly be “encrypted”, and effectively not reconstructable.

But it’s just conceivable that something like a breaking of symmetry associated with the first few hypergraphs might somehow survive. And that suggests the bizarre possibility that—just maybe—something like the angular structure of the cosmic microwave background or the very large-scale distribution of galaxies might reflect the discrete structure of the very early universe. Or, in other words, it’s just conceivable that what amounts to the rule for the universe is, in effect, painted across the whole sky. I think this is extremely unlikely, but it’d certainly be an amazing thing if the universe were “self-documenting” that way.

Elementary Particles—Old and New

We’ve talked several times about particles like electrons. In current physics theories, the various (truly) elementary particles—the quarks, the leptons (electron, muon, neutrinos, etc.), the gauge bosons, the Higgs—are all assumed to intrinsically be point particles, of zero size. In our models, that’s not how it works. The particles are all effectively “little lumps of space” that have various special properties.

My guess is that the precise list of what particles exist will be something that’s specific to a particular underlying rule. In cellular automata, for example, we’re used to seeing complicated sets of possible localized structures arise:


SeedRandom[2525]; ArrayPlot[
 CellularAutomaton[110, RandomInteger[1, 700], 500], 
 ImageSize -> Full, Frame -> None]

 

In our hypergraphs, the picture will inevitably be somewhat different. The “core feature” of each particle will be some kind of locally stable structure in the hypergraph (a simple analogy might be that it’s a lump of nonplanarity in an otherwise planar graph). But then there’ll be lots of causal edges associated with the particle, defining its particular energy and momentum.

Still, the “core feature” of the particles will presumably define things like their charge, quantum numbers, and perhaps spin—and the fact that these things are observed to occur in discrete units may reflect the fact that it’s a small piece of hypergraph that’s involved in defining them.

It’s not easy to know what the actual scale of discreteness in space might be in our models. But a possible (though potentially unreliable) estimate might be that the “elementary length” is around 10⁻⁹³ meters. (Note that that’s very small compared to the Planck length ~10⁻³⁵ meters that arises essentially from dimensional analysis.) And with this elementary length, the radius of the electron might be 10⁻⁸¹ meters. Tiny, but not zero. (Note that current experiments only tell us that the size of the electron is less than about 10⁻²² meters.)

One feature of our models is that there should be a “quantum of mass”—a discrete amount that all masses, for example of particles, are multiples of. With our estimate for the elementary length, this quantum of mass would be small: perhaps 10⁻³⁰ eV, or 10³⁶ times smaller than the mass of the electron.

And this raises an intriguing possibility. Perhaps the particles—like electrons—that we currently know about are the “big ones”. (With our estimates, an electron would have something like 10³⁶ hypergraph elements in it.) And maybe there are some much smaller, and much lighter ones. At least relative to the particles we currently know, such particles would have few hypergraph elements in them—so I’m referring to them as “oligons” (after the Greek word ὀλιγος for “few”).

What properties would these oligons have? They’d probably interact very very weakly with other things in the universe. Most likely lots of oligons would have been produced in the very early universe, but with their very weak interactions, they’d soon “drop out of thermal equilibrium”, and be left in large numbers as relics—with energies that become progressively lower as the universe expands around them.

So where might oligons be now? Even though their other interactions would likely be exceptionally weak, they’d still be subject to gravity. And if their energies end up being low enough, they’d basically collect in gravity wells around the universe—which means in and around galaxies.

And that’s interesting—because right now there’s quite a mystery about the amount of mass seen in galaxies. There appears to be a lot of “dark matter” that we can’t see but that has gravitational effects. Well, maybe it’s oligons. Maybe even lots of different kinds of oligons: a whole shadow physics of much lighter particles.

The Inevitability of Quantum Mechanics

“But how will you ever get quantum mechanics?”, physicists would always ask me when I would describe earlier versions of my models. In many ways, quantum mechanics is the pinnacle of existing physics. It’s always had a certain “you-are-not-expected-to-understand-this” air, though, coupled with “just-trust-the-mathematical-formalism”. And, yes, the mathematical formalism has worked well—really well—in letting us calculate things. (And it almost seems more satisfying because the calculations are often so hard; indeed, hard enough that they’re what first made me start using computers to do mathematics 45 years ago.)

Our usual impression of the world is that definite things happen. And before quantum mechanics, classical physics typically captured this in laws—usually equations—that would tell one what specifically a system would do. But in quantum mechanics the formalism involves any particular system doing lots of different things “in parallel”, with us just seeing samples—ultimately with certain probabilities—of these possibilities.

And as soon as one hears of a model in which there are definite rules, one might assume that it could never reproduce quantum mechanics. But, actually, in our models, quantum mechanics is not just possible; it’s absolutely inevitable. And, as we’ll see, in something I consider quite beautiful, the core of what leads to it turns out to be the same as what leads to relativity.

OK, so how does this work? Let’s go back to what we discussed when we first started talking about time. In our models there’s a definite rule for updates to make in our hypergraphs, say:

RulePlot[ResourceFunction[
   "WolframModel"][{{x, y}, {x, z}} -> {{y, z}, {y, w}, {z, w}, {x, 
     w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.6]

 

But if we’ve got a hypergraph like this:

ResourceFunction[
  "WolframModel"][{{x, y}, {x, z}} -> {{y, z}, {y, w}, {z, w}, {x, 
    w}}, {{0, 0}, {0, 0}}, 6, "FinalStatePlot"]

 

there will usually be many places where this rule can be applied. So which update should we do first? The model doesn’t tell us. But let’s just imagine all the possibilities. The rule tells us what they all are—and we can represent them (as we discussed above) as a multiway system—here illustrated using the simpler case of strings rather than hypergraphs:

ResourceFunction["MultiwaySystem"][{"A" -> "AB", 
  "B" -> "A"}, {"A"}, 6, "StatesGraph"]

 

Each node in this graph now represents a complete state of our system (a hypergraph in our actual models). And each node is joined by arrows to the state or states that one gets by applying a single update to it.
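
For a string system like this one it’s easy to sketch how these states get generated: StringReplaceList gives the results of every possible single replacement, i.e. one successor state per updating event (this is just a sketch of the idea, not how MultiwaySystem is actually implemented):

successors[s_String] := StringReplaceList[s, {"A" -> "AB", "B" -> "A"}];
NestList[Union[Catenate[successors /@ #]] &, {"A"}, 5]
(* {{"A"}, {"AB"}, {"AA", "ABB"}, {"AAB", "ABA", "ABBB"}, ...} *)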

If our model had been operating “like classical physics” we would expect it to progress in time from one state to another, say like this:

ResourceFunction["GenerationalMultiwaySystem"][{"A" -> "AB", 
  "B" -> "A"}, {"A"}, 5, "StatesGraph"]

 

But the crucial point is that the structure of our models leaves us no choice but to consider multiway systems. The form of the whole multiway system is completely determined by the rules. But—in a way that is already quite reminiscent of the standard formalism of quantum mechanics—the multiway system defines many different possible paths of history.

But now there is a mystery. If there are always all these different possible paths of history, how is it that we ever think that definite things happen in the world? This has been a core mystery of quantum mechanics for a century. It turns out that if one’s just using quantum mechanics to do calculations, the answer basically doesn’t matter. But if one wants to “really understand what’s going on” in quantum mechanics, it’s something that definitely does matter.

And the exciting thing is that in our models, there’s an obvious resolution. And actually it’s based on the exact same phenomenon—causal invariance—that gives us relativity.

Here’s roughly how this works. The key point is to think about what an observer who is themselves part of the multiway system will conclude about the world. Yes, there are different possible paths of history. But—just as in our discussion of relativity—the only aspect of them that an observer will ever be aware of is the causal relationships between the events they involve. And the point is that—even though the paths are different when looked at from “outside”—causal invariance implies that the network of causal relationships between events (which is all that’s relevant when one’s inside the system) will always be exactly the same.

In other words—much as in the case of relativity—even though from outside the system there may seem to be many possible “threads of time”, from inside the system causal invariance implies that there’s in a sense ultimately just one thread of time, or, in effect, one objective reality.

How does this all relate to the detailed standard formalism of quantum mechanics? It’s a little complicated. But let me make at least a few comments here. (There’s some more detail in my technical document; Jonathan Gorard has given even more.)

The states in the multiway system can be thought of as possible states of the quantum system. But how do we characterize how observers experience them? In particular, which states is the observer aware of when? Just like in the relativity case, the observer can in a sense make a choice of how they define time. One possibility might be through a foliation of the multiway system like this:

Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", 
   "B" -> "A"}, {"A"}, 6, "StatesGraph"], AspectRatio -> 1/2, 
 Epilog -> {ResourceFunction["WolframPhysicsProjectStyleData"][
    "BranchialGraph", "EdgeStyle"], AbsoluteThickness[1.5], 
   Table[Line[{{-8, i}, {10, i}}], {i, 1/2, 6 + 1/2}]}]

 

In the formalism of quantum mechanics, one can then say that at each time, the observer experiences a superposition of possible states of the system. But now there’s a critical point. In direct analogy to the case of relativity, there are many different possible choices the observer can make about how to define time—and each of them corresponds to a different foliation of the multiway graph.

Again by analogy to relativity, we can then think of these choices as what we can call different “quantum observation frames”. Causal invariance implies that as long as they respect the causal relationships in the graph, these frames can basically be set up in any way we want. In talking about relativity, it was useful to just have “tipped parallel lines” (“inertial frames”) representing observers who are moving uniformly in space.

In talking about quantum mechanics, other frames are useful. In particular, in the standard formalism of quantum mechanics, it’s common to talk about “quantum measurement”: essentially the act of taking a quantum system and determining some definite (essentially classical) outcome from it. Well, in our setup, a quantum measurement basically corresponds to a particular quantum observation frame.

Here’s an example:

(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-08/QM-foliations-01.wl*)

CloudGet["https://wolfr.am/LbdPPaXZ"]; Magnify[
  With[{graph = 
     Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB"}, {"AA"}, 
       7, "StatesGraph"], 
      VertexShapeFunction -> {Alternatives @@ 
          VertexList[
           ResourceFunction[
             "GenerationalMultiwaySystem"][{"A" -> "AB"}, {"AA"}, 5, 
            "StatesGraph"]] -> (Text[
            Framed[Style[stripMetadata[#2] , Hue[0, 1, 0.48]], 
             Background -> 
              Directive[Opacity[.6], Hue[0, 0.45, 0.87]],	
             FrameMargins -> {{2, 2}, {0, 0}}, RoundingRadius -> 0, 
             FrameStyle -> 
              Directive[Opacity[0.5], 
               Hue[0, 0.52, 0.8200000000000001]]], #1, {0, 0}] &)}, 
      VertexCoordinates -> (Thread[
           VertexList[#] -> GraphEmbedding[#, Automatic, 2]] &[
         ResourceFunction["MultiwaySystem"][{"A" -> "AB"}, {"AA"}, 8, 
          "StatesGraph"]])]}, 
   Show[graph, 
    foliationGraphics[graph, #, {0.1, 0.05}, 
       Directive[Hue[0.89, 0.97, 0.71], 
        AbsoluteThickness[1.5]]] & /@ {{{"AA"}}, {{
       "AA", "AAB", "ABA"}}, {{
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA"}}, {{
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", 
        "ABBAB", "ABBBA"}}, {{
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", 
        "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", 
        "ABBBBA"}, {
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", 
        "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", 
        "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA"}, {
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", 
        "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", 
        "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA", 
        "AABBBBBB", "ABABBBBB", "ABBBBBAB", "ABBBBBBA"}, {
       "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", 
        "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", 
        "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA", 
        "AABBBBBB", "ABABBBBB", "ABBBBBAB", "ABBBBBBA", "AABBBBBBB", 
        "ABABBBBBB", "ABBBBBBAB", "ABBBBBBBA"}}}]], 0.9]

 

The successive pink lines effectively mark off what the observer is considering to be successive moments in time. So when all the lines bunch up below the state ABBABB what it means is that the observer is effectively choosing to “freeze time” for that state. In other words, the observer is saying “that’s the state I consider the system to be in, and I’m sticking to it”. Or, put another way, even though in the full multiway graph there’s all sorts of other “quantum mechanical” evolution of states going on, the observer has set up their quantum observation frame so that they pick out just a particular, definite, classical-like outcome.

OK, but can they consistently do that? Well, that depends on the actual underlying structure of the multiway graph, which ultimately depends on the actual underlying rule. In the example above, we’ve set up a foliation (i.e. a quantum observation frame) that does the best possible job this rule allows of “freezing time” for the ABBABB state. But just how long can this “reality distortion field” be maintained?

The only way to keep the foliation consistent in the multiway graph above is to have it progressively expand over time. In other words, to keep time frozen, more and more quantum states have to be pulled into the “reality distortion field”, and so there’s less and less coherence in the system.

The picture above is for a very trivial rule. Here’s a corresponding picture for a slightly more realistic case:

(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-08/QM-foliations-01.wl*)
CloudGet["https://wolfr.am/LbdPPaXZ"];
Show[drawFoliation[
  Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", 
     "B" -> "A"}, {"A"}, 6, "StatesGraph"], 
   VertexShapeFunction -> {Alternatives @@ 
       VertexList[
        ResourceFunction["GenerationalMultiwaySystem"][{"A" -> "AB", 
          "B" -> "A"}, {"A"}, 5, "StatesGraph"]] -> (Text[
         Framed[Style[stripMetadata[#2] , Hue[0, 1, 0.48]], 
          Background -> Directive[Opacity[.2], Hue[0, 0.45, 0.87]],	
          FrameMargins -> {{2, 2}, {0, 0}}, RoundingRadius -> 0, 
          FrameStyle -> 
           Directive[Opacity[0.5], 
            Hue[0, 0.52, 0.8200000000000001]]], #1, {0, 
          0}] &)}], {{"A", "AB", "AA", "ABB", "ABA"}, {"A", "AB", 
    "AA", "ABB", "ABA", "AAB", "ABBB"}, {"A", "AB", "AA", "ABB", 
    "ABA", "AAB", "ABBB", "AABB", "ABBBB"}}, {0.1, 0}, 
  Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]]], 
 Graphics[{Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]], 
   AbsoluteThickness[1.6`], 
   Line[{{-3.35, 4.05}, {-1.85, 3.3}, {-0.93, 2.35}, {-0.93, 
      1.32}, {0.23, 1.32}, {0.23, 2.32}, {2.05, 2.32}, {2.05, 
      1.51}, {1.15, 1.41}, {1.15, 0.5}, {2.15, 0.5}, {2.25, 
      1.3}, {4.3, 1.3}, {4.6, 0.5}, {8.6, 0.5}}]}]]

 

And what we see here is that—even in this still incredibly simplified case—the structure of the multiway system will force the observer to construct a more and more elaborate foliation if they are to successfully freeze time. Measurement in quantum mechanics has always involved a slightly uncomfortable mathematical idealization—and this now gives us a sense of what’s really going on. (The situation is ultimately very similar to the problem of decoding “encrypted” thermodynamic initial conditions that I mentioned above.)

Quantum measurement is really about what an observer perceives. But if you are for example trying to construct a quantum computer, it’s not just a question of having a qubit be perceived as being maintained in a particular state; it actually has to be maintained in that state. And for this to be the case we actually have to freeze time for that qubit. Here’s a very simplified example of how that can happen in a multiway graph:

(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\
Section-08/QM-foliations-01.wl*)
CloudGet["https://wolfr.am/LbdPPaXZ"]; Magnify[
 Show[With[{graph = 
     Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", 
        "XABABX" -> "XXXX"}, {"XAAX"}, 6, "StatesGraph"], 
      VertexCoordinates -> 
       Append[(Thread[
            VertexList[#] -> GraphEmbedding[#, Automatic, 2]] &[
          ResourceFunction["MultiwaySystem"][{"A" -> "AB", 
            "XABABX" -> "XXXX"}, {"XAAX"}, 8, "StatesGraph"]]), 
        "XXXX" -> {0, 5.5}]]}, 
   Show[graph, 
    foliationGraphics[graph, #, {0.1, 0.05}, 
       Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]]] & /@ {
Sequence[{{"XAAX"}}, {{"XAAX", "XAABX", "XABAX"}}, {{
        "XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX"}}, {{
        "XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX", 
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX"}}, {{
        "XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX", 
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX", "XAABBBBX", 
         "XABABBBX", "XABBABBX", "XABBBABX", "XABBBBAX"}, {
        "XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX", 
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX", "XAABBBBX", 
         "XABABBBX", "XABBABBX", "XABBBABX", "XABBBBAX", "XAABBBBBX", 
         "XABABBBBX", "XABBBBABX", "XABBBBBAX", "XABBABBBX", 
         "XABBBABBX"}}, {}, {}]}]]], .6]

 

All this discussion of “freezing time” might seem weird, and not like anything one usually talks about in physics. But actually, there’s a wonderful connection: the freezing of time we’re talking about here can be thought of as happening because we’ve got the analog in the space of quantum states of a black hole in physical space.

The picture above makes it plausible that we’ve got something where things can go in, but if they do, they always get stuck. But there’s more to it. If you’re an observer far from a black hole, then you’ll never actually see anything fall into the black hole in finite time (that’s why black holes are called “frozen stars” in Russian). And the reason for this is precisely because (according to the mathematics) time is frozen at the event horizon of the black hole. In other words, to successfully make a qubit, you effectively have to isolate it in quantum space like things get isolated in physical space by the presence of the event horizon of a black hole.

General Relativity and Quantum Mechanics Are the Same Idea!

General relativity and quantum mechanics are the two great foundational theories of current physics. And in the past it’s often been a struggle to reconcile them. But one of the beautiful outcomes of our project so far has been the realization that at some deep level general relativity and quantum mechanics are actually the same idea. It’s something that (at least so far) is only clear in the context of our models. But the basic point is that both theories are consequences of causal invariance—just applied in different situations.

Recall our discussion of causal graphs in the context of relativity above. We drew foliations and said that if we looked at a particular slice, it would tell us the arrangement of the system in space at what we consider to be a particular time. So now let’s look at multiway graphs. We saw in the previous section that in quantum mechanics we’re interested in foliations of these. But if we look at a particular slice in one of these foliations, what does it represent? Each slice has a bunch of states in it. And it turns out that we can think of them as being laid out in an abstract kind of space that we’re calling “branchial space”.

To make sense of this space, we have to have a way to say what’s near what. But actually the multiway graph gives us that. Take a look at this multiway graph:

foliationLines[{lineDensityHorizontal_ : 1, 
    lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, 
    tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, 
   lineStyles : {_, _} : {Red, Red}, 
   transform_ : (# &)] := {If[lineDensityHorizontal != 0, 
    Style[Table[
      Line[transform /@ {{-100 + First@offset, 
          k - 100 tanHorizontal + Last@offset}, {100 + First@offset, 
          k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityHorizontal}], First@lineStyles], {}], 
   If[lineDensityVertical != 0, 
    Style[Table[
      Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + 
           Last@offset}, {k + 100 tanVertical + First@offset, 
          100 + Last@offset}}], {k, -100.5, 100.5, 
       1/lineDensityVertical}], Last@lineStyles], {}]};
LayeredGraphPlot[
 ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, "A", 5,
   "EvolutionGraph"], 
 Epilog -> 
  foliationLines[{1, 0}, {0, 0}, {0, 
    0}, {ResourceFunction["WolframPhysicsProjectStyleData"][
     "BranchialGraph", "EdgeStyle"], 
    ResourceFunction["WolframPhysicsProjectStyleData"][
     "BranchialGraph", "EdgeStyle"]}]]

 

At each slice in the foliation, let’s draw a graph where we connect two states whenever they’re both part of the same “branch pair”, so that—like AA and ABB here—they both come from the same state on the slice before. Here are the graphs we get by doing this for successive slices:

Table[ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, 
  "A", t, If[t <= 5, "BranchialGraph", 
   "BranchialGraphStructure"]], {t, 2, 8}]

 

We call these branchial graphs. And we can think of them as representing the correlation—or entanglement—of quantum states. Two states that are nearby in the graph are highly entangled; those further away, less so. And we can imagine that as our system evolves, we’ll get larger and larger branchial graphs, until eventually, just like for our original hypergraphs, we can think of these graphs as limiting to something like a continuous space.
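
One can make the “nearby in the graph” notion concrete with ordinary graph distance, here between two arbitrarily chosen states on a slice (just an illustrative proxy for degree of entanglement):

bg = ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, "A", 5, "BranchialGraph"];
GraphDistance[bg, First[VertexList[bg]], Last[VertexList[bg]]]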

But what is this space like? For our original hypergraphs, we imagined that we’d get something like ordinary physical space (say close to three-dimensional Euclidean space). But branchial space is something more abstract—and much wilder. And typically it won’t even be finite-dimensional. (It might approximate a projective Hilbert space.) But we can still think of it mathematically as some kind of space.

OK, things are getting fairly complicated here. But let me try to give at least a flavor of how things work. Here’s an example of a wonderful correspondence: curvature in physical space is like the uncertainty principle of quantum mechanics. Why do these have anything to do with each other?

The uncertainty principle says that if you measure, say, the position of something, then its momentum, you’ll get a different answer than if you do it in the opposite order. But now think about what happens when you try to make a rectangle in physical space by going in direction x first, then direction y, and then doing the same displacements in the opposite order. In a flat space, you’ll get to the same place. But in a curved space, you won’t:

parallelTransportOnASphere[size_] := 
  Module[{\[Phi], \[Theta]}, 
   With[{spherePoint = {Cos[\[Phi]] Sin[\[Theta]], 
       Sin[\[Phi]] Sin[\[Theta]], Cos[\[Theta]]}}, 
    Graphics3D[{{Lighter[Yellow, .2], Sphere[]}, 
      First@ParametricPlot3D[
        spherePoint /. \[Phi] -> 0, {\[Theta], \[Pi]/2, \[Pi]/2 - 
          size}, PlotStyle -> Darker@Red], 
      Rotate[First@
        ParametricPlot3D[
         spherePoint /. \[Phi] -> 0, {\[Theta], \[Pi]/2, \[Pi]/2 - 
           size}, PlotStyle -> Darker@Red], \[Pi]/2, {-1, 0, 0}], 
      Rotate[First@
        ParametricPlot3D[
         spherePoint /. \[Phi] -> 0, {\[Theta], \[Pi]/2, \[Pi]/2 - 
           size}, PlotStyle -> Darker@Red], size, {0, 0, 1}], 
      Rotate[Rotate[
        First@ParametricPlot3D[
          spherePoint /. \[Phi] -> 0, {\[Theta], \[Pi]/2, \[Pi]/2 - 
            size}, PlotStyle -> Darker@Red], \[Pi]/2, {-1, 0, 0}], 
       size, {0, -1, 0}]}, Boxed -> False, SphericalRegion -> False, 
     Method -> {"ShrinkWrap" -> True}, ViewPoint -> {2, size, size}]]];
parallelTransportOnASphere[0 | 0.] := 
  parallelTransportOnASphere[1.*^-10];
parallelTransportOnASphere[0.7]

 

And essentially what’s happening in the uncertainty principle is that you’re doing exactly this, but in branchial space, rather than physical space. And it’s because branchial space is wild—and effectively very curved—that you get the uncertainty principle.
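
The order-dependence itself is easy to exhibit quite apart from our models: in a standard n-level truncation of the position and momentum operators, doing the two operations in opposite orders differs by the textbook commutator:

n = 6;
a = DiagonalMatrix[Sqrt[Range[n - 1]], 1];  (* lowering operator *)
x = (a + ConjugateTranspose[a])/Sqrt[2];    (* position *)
p = I (ConjugateTranspose[a] - a)/Sqrt[2];  (* momentum *)
x.p - p.x
(* I IdentityMatrix[n], apart from a truncation artifact in the last diagonal entry *)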

Alright, so the next question might be: what’s the analog of the Einstein equations in branchial space? And again, it’s quite wonderful: at least in some sense, the answer is that it’s the path integral—the fundamental mathematical construct of modern quantum mechanics and quantum field theory.

This is again somewhat complicated. But let me try to give a flavor of it. Just as we discussed geodesics as describing paths traversed through physical space in the course of time, so also we can discuss geodesics as describing paths traversed through branchial space in the course of time. In both cases these geodesics are determined by curvature in the corresponding space. In the case of physical space, we argued (roughly) that the presence of excess causal edges—corresponding to energy—would lead to what amounts to curvature in the spatial hypergraph, as described by Einstein’s equations.

OK, so what about branchial space? Just like for the spatial hypergraph, we can think about the causal connections between the updating events that define the branchial graph. And we can once again imagine identifying the flux of causal edges—now not through spacelike hypersurfaces, but through branchlike ones—as corresponding to energy. And—much like in the spatial hypergraph case—an excess of these causal edges will have the effect of producing what amounts to curvature in branchial space (or, more strictly, in branchtime—the analog of spacetime). But this curvature will then affect the geodesics that traverse branchial space.

In general relativity, the presence of mass (or energy) causes curvature in space which causes the paths of geodesics to turn—which is what is normally interpreted as the action of the force of gravity. But now we have an analog in quantum mechanics, in our branchial space. The presence of energy effectively causes curvature in branchial space which causes the paths of geodesics through branchial space to turn.

What does turning correspond to? Basically it’s exactly what the path integral talks about. The path integral (and the usual formalism of quantum mechanics) is set up in terms of complex numbers. But it can just as well be thought of in terms of turning through an angle. And that’s exactly what’s happening with our geodesics in branchial space. In the path integral there’s a quantity called the action—which is a kind of relativistic analog of energy—and when one works things out more carefully, our fluxes of causal edges correspond to the action, but are also exactly what determine the rate of turning of geodesics.

It all fits together beautifully. In physical space we have Einstein’s equations—the core of general relativity. And in branchial space (or, more accurately, multiway space) we have Feynman’s path integral—the core of modern quantum mechanics. And in the context of our models they’re just different facets of the same idea. It’s an amazing unification that I have to say I didn’t see coming; it’s something that just emerged as an inevitable consequence of our simple models of applying rules to collections of relations, or hypergraphs.

Branchial Motion and the Entanglement Horizon

We can think of motion in physical space as like the process of exploring new elements in the spatial hypergraph, and potentially becoming affected by them. But now that we’re talking about branchial space, it’s natural to ask whether there’s something like motion there too. And the answer is that there is. And it’s basically exactly the same kind of thing: but instead of exploring new elements in the spatial hypergraph, we’re exploring new elements in the branchial graph, and potentially becoming affected by them.

There’s a way of talking about it in the standard language of quantum mechanics: as we move in branchial space, we’re effectively getting “entangled” with more and more quantum states.

OK, so let’s take the analogy further. In physical space, there’s a maximum speed of motion—the speed of light, c. So what about in branchial space? Well, in our models we can see that there’s also got to be a maximum speed of motion in branchial space. Or, in other words, there’s a maximum rate at which we can entangle with new quantum states.
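
Here’s one toy way to get a feel for that in a string multiway system: counting the states on successive branchial slices shows growth that is bounded step by step (only a caricature of an actual maximum entanglement rate):

Table[VertexCount[
  ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, "A", t, 
   "BranchialGraph"]], {t, 2, 7}]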

In physical space we talk about light cones as being the regions that can be causally affected by some event at a particular location in space. In the same way, we can talk about entanglement cones that define regions in branchial space that can be affected by events at some position in branchial space. And just as there’s a causal graph that effectively knits together elementary light cones, there’s something similar that knits together entanglement cones.

That something similar is the multiway causal graph: a graph that represents causal relationships between all events that can happen anywhere in a multiway system. Here’s an example of a multiway causal graph for just a few steps of a very simple string substitution system—and it’s already pretty complicated:

LayeredGraphPlot[
 Graph[ResourceFunction["MultiwaySystem"][
   "WolframModel" ->  { {{x, y}, {x, z}} -> {{y, w}, {y, z}, {w, 
        x}}}, {{{0, 0}, {0, 0}}}, 6, "CausalGraphStructure"]]]

 

But in a sense the multiway causal graph is the most complete description of everything that can affect the experience of observers. Some of the causal relationships it describes represent spacelike connections; some represent branchlike connections. But all of them are there. And so in a sense the multiway causal graph is where relativity and quantum mechanics come together. Slice one way and you’ll see relationships in physical space; slice another way and you’ll see relationships in branchial space, between quantum states.

To help see how this works, here’s a very toy version of a multiway causal graph:

Graph3D[ResourceFunction["GeneralizedGridGraph"][{4 -> "Directed", 4, 
   4}, EdgeStyle -> {Darker[Blue], Darker[Blue], Purple}]]

 

Each point is an event that happens in some hypergraph on some branch of a multiway system. And now the graph records the causal relationship of that event to other ones. In this toy example, there are purely timelike relationships—indicated by arrows pointing down—in which basically some element of the hypergraph is affecting its future self. But then there are both spacelike and branchlike relationships, where the event affects elements that are either “spatially” separated in the hypergraph, or “branchially” separated in the multiway system.

But in all this complexity, there’s something wonderful that happens. As soon as the underlying rule has causal invariance, this implies all sorts of regularities in the multiway causal graph. And for example it tells us that all those causal graphs we get by taking different branchtime slices are actually the same when we project them into spacetime—and this is what leads to relativity.

But causal invariance has other consequences too. One of them is that there should be an analog of special relativity that applies not in spacetime but in branchtime. The reference frames of special relativity are now our quantum observation frames. And the analog of speed in physical space is the rate of entangling new quantum states.

So what about a phenomenon like relativistic time dilation? Is there an analog of that for motion in branchial space? Well, actually, yes there is. And it turns out to be what’s sometimes called the quantum Zeno effect: if you repeatedly measure a quantum system fast enough it won’t change. It’s a phenomenon that’s implied by the add-ons to the standard formalism of quantum mechanics that describe measurement. But in our models it just comes directly from the analogy between branchial and physical space.

Doing new measurements is equivalent to getting entangled with new quantum states—or to moving in branchial space. And in direct analogy to what happens in special relativity, as you get closer to moving at the maximum speed you inevitably sample things more slowly in time—and so you get time dilation, which means that your “quantum evolution” slows down.

OK, so there are relativistic phenomena in physical space, and quantum analogs in branchial space. But in our models these are all effectively facets of one thing: the multiway causal graph. So are there situations in which the two kinds of phenomena can mix? Normally there aren’t: relativistic phenomena involve large physical scales; quantum phenomena tend to involve small ones.

But one example of an extreme situation where they can mix is black holes. I’ve mentioned several times that the formation of an event horizon around a black hole is associated with disconnection in the causal graph. But it’s more than that. It’s actually disconnection not only in the spacetime causal graph, but in the full multiway causal graph. And that means that there’s not only an ordinary causal event horizon—in physical space—but also an “entanglement horizon” in branchial space. And just as a piece of the spatial hypergraph can get disconnected when there’s a black hole, so can a piece of the branchial graph.

What does this mean? There are a variety of consequences. One of them is that quantum information can be trapped inside the entanglement horizon even when it hasn’t crossed the causal event horizon—so that in effect the black hole is freezing quantum information “at its surface” (at least its surface in branchial space). It’s a weird phenomenon implied by our models, but what’s perhaps particularly interesting about it is that it’s very much aligned with conclusions about black holes that have emerged in some of the latest work in physics on the so-called holographic principle in quantum field theory and general relativity.

Here’s another related, weird phenomenon. If you pass the causal event horizon of a black hole, it’s an inevitable fact that you’ll eventually get infinitely physically elongated (or “spaghettified”) by tidal forces. Well, something similar happens if you pass the entanglement horizon—except now you’ll get elongated in branchial space rather than physical space. And in our models, this eventually means you won’t be able to make a quantum measurement—so in a sense as an observer you won’t be able to “form a classical thought”, or, in other words, beyond the entanglement horizon you’ll never be able to “come to a definite conclusion” about, for example, whether something fell into the black hole or didn’t.

The speed of light c is a fundamental physical constant that relates distance in physical space to time. In our models, there’s now a new fundamental physical constant: the maximum entanglement speed, which relates distance in branchial space to time. I call this maximum entanglement speed ζ (zeta) (ζ looks a bit like a “tangled c”). I’m not sure what its value is, but a possible estimate is that it corresponds to entangling about 10^102 new quantum states per second. And in a sense the fact that this is so big is why we’re normally able to “form classical thoughts”.

Because of the relation between (multiway) causal edges and energy, it’s possible to convert ζ to units of energy per second, and our estimate then implies that ζ is about 10^5 solar masses per second. It’s a big value, although conceivably not irrelevant to something like a merger of galactic black holes. (And, yes, this would mean that for an intelligence to “quantum grok” our galaxy would take maybe six months.)
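
As a back-of-envelope check on that “six months” remark (assuming a total galactic mass of about 1.5*10^12 solar masses, roughly that of the Milky Way, together with the ζ estimate above):

UnitConvert[
 Quantity[1.5*10^12, "SolarMass"]/Quantity[10^5, "SolarMass"/"Seconds"], 
 "Months"]
(* about 6 months *)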

Finding the Ultimate Rule

I’m frankly amazed at how much we’ve been able to figure out just from the general structure of our models. But to get a final fundamental theory of physics we’ve still got to find a specific rule. A rule that gives us 3 (or so) dimensions of space, the particular expansion rate of the universe, the particular masses and properties of elementary particles, and so on. But how should we set about finding this rule?

And actually even before that, we need to ask: if we had the right rule, would we even know it? As I mentioned earlier, there’s potentially a big problem here with computational irreducibility. Because whatever the underlying rule is, our actual universe has applied it an almost unimaginably large number of times. And if there’s computational irreducibility—as there inevitably will be—then there won’t be a way to fundamentally reduce the amount of computational effort that’s needed to determine the outcome of all these rule applications.

But what we have to hope is that somehow—even though the complete evolution of the universe is computationally irreducible—there are still enough “tunnels of computational reducibility” that we’ll be able to figure out at least what’s needed to be able to compare with what we know in physics, without having to do all that computational work. And I have to say that our recent success in getting conclusions just from the general structure of our models makes me much more optimistic about this possibility.

But, OK, so what rules should we consider? The traditional approach in natural science (at least over the past few centuries) has tended to be: start from what you know about whatever system you’re studying, then try to “reverse engineer” what its rules are. But in our models there’s in a sense too much emergence for this to work. Look at something like this:

ResourceFunction[
  "WolframModel"][{{1, 2, 2}, {2, 3, 4}} -> {{4, 3, 3}, {4, 1, 
    5}, {2, 4, 5}}, {{0, 0, 0}, {0, 0, 0}}, 500, "FinalStatePlot"]

 

Given the overall form of this structure, would you ever figure that it could be produced just by the rule:

{{x, y, y}, {y, z, u}} → {{u, z, z}, {u, x, v}, {y, u, v}}

RulePlot[ResourceFunction[
   "WolframModel"][{{x, y, y}, {y, z, u}} -> {{u, z, z}, {u, x, 
     v}, {y, u, v}}]]

 

Having myself explored the computational universe of simple programs for some forty years, I have to say that even now it’s amazing how often I’m humbled by the ability of extremely simple rules to give behavior I never expected. And this is particularly common with the very structureless models we’re using here. So in the end the only real way to find out what can happen in these models is just to enumerate possible rules, and then run them and see what they do.
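
Here’s that enumerate-and-run approach in miniature, for string systems rather than hypergraphs (just to show the workflow; the real search is over hypergraph rules):

rules = Table["A" -> StringJoin[s], {s, Tuples[{"A", "B"}, 3]}];
ResourceFunction["MultiwaySystem"][{#}, {"A"}, 3, "StatesGraph"] & /@ rules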

But now there’s a crucial question. If we just start enumerating very simple rules, how far are we going to have to go before we find our universe? Or, put another way, just how simple is the rule for our universe going to end up being?

It could have been that in a sense the rule for the universe would have a special case in it for every element of the universe—every particle, every position in space, etc. But the very fact that we’ve been able to find definite scientific laws—and that systematic physics has even been possible—suggests that the rule at least doesn’t have that level of complexity. But how simple might it be? We don’t know. And I have to say that I don’t think our recent discoveries shed any particular light on this—because they basically say that lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be.

Why This Universe? The Relativity of Rules

But, OK, let’s say we find that our universe can be described by some particular rule. Then the obvious immediate question would be: why that rule, and not another? The history of science—certainly since Copernicus—has shown us over and over again evidence that we’re “not special”. But if the rule we find to describe our universe is simple, wouldn’t that simplicity be a sign of “specialness”?

I have long wondered about this. Could it for example be that the rule is only simple because of the way that we, as entities existing in our particular universe, choose to set up our ways of describing things? And that in some other universe, with some other rule, the entities that exist there would set up their ways of describing things so that the rule for their universe is simple to them, even though it might be very complex to us?

Or could it be that in some fundamental sense it doesn’t matter what the rules for the universe are: that to observers embedded in a universe, operating according to the same rules as that universe, the conclusions about how the universe works will always be the same?

Or could it be that this is a kind of question that’s just outside the realm of science?

To my considerable surprise, the paradigm that’s emerging from our recent discoveries potentially seems to suggest a definite—though at first seemingly bizarre—scientific answer.

In what we’ve discussed so far we’re imagining that there’s a particular, single rule for our universe, that gets applied over and over again, effectively in all possible ways. But what if there wasn’t just one rule that could be used? What if all conceivable rules could be used? What if every updating event could just use any possible rule? (Notice that in a finite universe, there are only ever finitely many rules that can ever apply.)

At first it might not seem as if this setup would ever lead to anything definite. But imagine making a multiway graph of absolutely everything that can happen—including all events for all possible rules. This is a big, complicated object. But far from being structureless, it’s full of all kinds of structure.
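
One can get a tiny, finite caricature of this by letting every event in a multiway system choose from several rules at once (nothing like “all conceivable rules”, but the same basic construction):

ResourceFunction["MultiwaySystem"][{"A" -> "AB", "A" -> "BA", "B" -> "A"}, {"A"}, 4, "StatesGraph"]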

And there’s one very important thing about this all-rules multiway graph: it’s basically guaranteed to have causal invariance (basically because if there’s a rule that does something, there’s always another rule somewhere that can undo it).

So now we can make a rule-space multiway causal graph—which will show a rule-space analog of relativity. And what this means is that in the rule-space multiway graph, we can expect to make different foliations, but have them all give consistent results.

It’s a remarkable conceptual unification. We’ve got physical space, branchial space, and now also what we can call rulial space (or just rule space). And the same overall ideas and principles apply to all of them. And just as we defined reference frames in physical space and branchial space, so also we can define reference frames in rulial space.

But what kinds of reference frames might observers set up in rulial space? In a typical case we can think of different reference frames in rulial space as corresponding to different description languages in which an observer can describe their experience of the universe.

In the abstract, it’s a familiar idea that given any particular description language, we can always explicitly program any universal computer to translate it to another description language. But what we’re saying here is that in rulial space it just takes choosing a different reference frame to have our representation of the universe use a different description language.

And roughly the reason this works is that different foliations of rulial space correspond to different choices of sequences of rules in the rule-space multiway graph—which can in effect be set up to “compute” the output that would be obtained with any given description language. That this can work ultimately depends on the fact that sequences of our rules can support universal computation (which the Principle of Computational Equivalence implies they ubiquitously will)—which is in effect why it only takes “choosing a different reference frame in rule space” to “run a different program” and get a different description of the observed behavior of the universe.

It’s a strange but rather appealing picture. The universe is effectively using all possible rules. But as entities embedded in the universe, we’re picking a particular foliation (or sequence of reference frames) to make sense of what’s happening. And that choice of foliation corresponds to a description language which gives us our particular way of describing the universe.

But what is there to say definitely about the universe—independent of the foliation? There’s one immediate thing: that the universe, whatever foliation one uses to describe it, is just a universal computer, and nothing more. And that hypercomputation is never possible in the universe.

But given the structure of our models, there’s more. Just like there’s a maximum speed in physical space (the speed of light c), and a maximum speed in branchial space (the maximum entanglement speed ζ), so also there must be a maximum speed in rulial space, which we can call ρ—that’s effectively another fundamental constant of nature. (The constancy of ρ is in effect a reflection of the Principle of Computational Equivalence.)

But what does moving in rulial space correspond to? Basically it’s a change of rule. And to say that this can only happen at a finite speed is to say that there’s computational irreducibility: that one rule cannot emulate another infinitely fast. And given this finite “speed of emulation” there are “emulation cones” that are the analog of light cones, and that define how far one can get in rulial space in a certain amount of time.

What are the units of ρ? Essentially they are program length divided by time. But whereas in the theory of computation one typically imagines that program length can be scaled almost arbitrarily by different models of computation, here this is a measure of program length that’s somehow fundamentally anchored to the structure of the rule-space multiway system, and of physics. (By the way, there’ll be an analog of curvature and Einstein’s equations in rulial space too—and it probably corresponds to a geometrization of computational complexity theory and questions like P?=NP.)

There’s more to say about the structure of rulial space. For example, let’s imagine we try to make a foliation in which we freeze time somewhere in rulial space. That’ll correspond to trying to describe the universe using some computationally reducible model—and over time it’ll get more and more difficult to maintain this as emulation cones effectively deliver more and more computational irreducibility.

So what does all this mean for our original goal—of finding a rule to describe our universe? Basically it’s saying that any (computationally universal) rule will do—if we’re prepared to craft the appropriate description language. But the point is that we’ve basically already defined at least some elements of our description language: they are the kinds of things our senses detect, our measuring devices measure, and our existing physics describes. So now our challenge is to find a rule that successfully describes our universe within this framework.

For me this is a very satisfactory solution to the mystery of why some particular rule would be picked for our universe. The answer is that there isn’t ultimately ever a particular rule; basically any rule capable of universal computation will do. It’s just that—with some particular mode of description that we choose to use—there will be some definite rule that describes our universe. And in a sense whatever specialness there is to this rule is just a reflection of the specialness of our mode of description. In effect, the only thing special about the universe to us is us ourselves.

And this suggests a definite answer to another longstanding question: could there be other universes? The answer in our setup is basically no. We can’t just “pick another rule and get another universe”. Because in a sense our universe already contains all possible rules, so there can only be one of it. (There could still be other universes that do various levels of hypercomputation.)

But there is something perhaps more bizarre that is possible. While we view our universe—and reality—through our particular type of description language, there are endless other possible description languages which can lead to descriptions of reality that will seem coherent (and even in some appropriate definition “meaningful”) within themselves, but which will seem to us to correspond to utterly incoherent and meaningless aspects of our universe.

I’ve always assumed that any entity that exists in our universe must at least “experience the same physics as us”. But now I realize that this isn’t true. There’s actually an almost infinite diversity of different ways to describe and experience our universe, or in effect an almost infinite diversity of different “planes of existence” for entities in the universe—corresponding to different possible reference frames in rulial space, all ultimately connected by universal computation and rule-space relativity.

The Challenge of Language Design for the Universe

What does it mean to make a model for the universe? If we just want to know what the universe does, well, then we have the universe, and we can just watch what it does. But when we talk about making a model, what we really mean is that we want to have a representation of the universe that somehow connects it to what we humans can understand. Given computational irreducibility, it’s not that we expect a model that will in any fundamental sense “predict in advance” the precise behavior of the universe down to every detail (like that I am writing this sentence now). But we do want to be able to point to the model—whose structure we understand—and then be able to say that this model corresponds to our universe.

In the previous section we said that we wanted to find a rule that we could in a sense connect with the description language that we use for the universe. But what should the description language for the rule itself be? Inevitably there is a great computational distance between the underlying rule and features of the universe that we’re used to describing. So—as I’ve said several times here in different ways—we can’t expect to use the ordinary concepts with which we describe the world (or physics) directly in the construction of the rule.

I’ve spent the better part of my life as a language designer, primarily building what’s now the full-scale computational language that is the Wolfram Language. And I now view the effort to find a fundamental theory of physics as in many ways just another challenge in language design—perhaps even the ultimate such challenge.

In designing a computational language what one is really trying to do is to create a bridge between two domains: the abstract world of what is possible to do computationally, and the “mental” world of what people understand and are interested in doing. There are all sorts of computational processes that one can invent (say running randomly picked cellular automaton rules), but the challenge in language design is to figure out which ones people care about at this point in human history, and then to give people a way to describe these.

Usually in computational language design one is leveraging human natural language—or the more formal languages that have been developed in mathematics and science—to find words or their analogs to refer to particular “lumps of computation”. But at least in the way I have done it, the essence of language design is to try to find the purest primitives that can be expressed this way.

OK, so let’s talk about setting up a model for the universe. Perhaps the single most important idea in my effort to find a fundamental theory of physics is that the theory should be based on the general computational paradigm (and not, for example, specifically on mathematics). So when we talk about having a language in which to describe our model of the universe we can see that it has to bridge three different domains. It has to be a language that humans can understand. It has to be a language that can express computational ideas. And it has to be a language that can actually represent the underlying structure of physics.

So what should this language be like? What kinds of primitives should it contain? The history that has led me to what I describe here is in many ways the history of my attempts to formulate an appropriate language. Is it trivalent graphs? Is it ordered graphs? Is it rules applied to abstract relations?

In many ways, we are inevitably skating at the edge of what humans can understand. Maybe one day we will have built up familiar ways of talking about the concepts that are involved. But for now, we don’t have these. And in a sense what has made this project feasible now is that we’ve come so far in developing ways to express computational ideas—and that through the Wolfram Language in particular those forms of expression have become familiar, at the very least to me.

And it’s certainly satisfying to see that the basic structure of the models we’re using can be expressed very cleanly and succinctly in the Wolfram Language. In fact, in what perhaps can be viewed as some sort of endorsement of the structure of the Wolfram Language, the models are in a sense just a quintessential example of transformation rules for symbolic expressions, which is exactly what the Wolfram Language is based on. But even though the structure is well represented in the Wolfram Language, the “use case” of “running the universe” is different from what the Wolfram Language is normally set up to do.
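
To see the point at its most basic, here’s one update of the hypergraph rule we used earlier, written directly as a Wolfram Language pattern transformation (a sketch only: it rewrites just the first matching pair of relations, whereas the actual models consider every possible match):

update[hg_] := Module[{w = Unique["v"]}, 
  hg /. {{x_, y_}, {x_, z_}, rest___} :> {{y, z}, {y, w}, {z, w}, {x, w}, rest}];
NestList[update, {{0, 0}, {0, 0}}, 3]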

In the effort to serve what people normally want, the Wolfram Language is primarily about taking input, evaluating it by doing computation, and then generating output. But that’s not what the universe does. The universe in a sense had input at the very beginning, but now it’s just running an evaluation—and with all our different ideas of foliations and so on, we are sampling certain aspects of that ongoing evaluation.

It’s computation, but it’s computation sampled in a different way than we’ve been used to doing it. To a language designer like me, this is something interesting in its own right, with its own scientific and technological spinoffs. And perhaps it will take more ideas before we can finish the job of finding a way to represent a rule for fundamental physics.

But I’m optimistic that we actually already have pretty much all the ideas we need. And we also have a crucial piece of methodology that helps us: our ability to do explorations through computer experiments. If we based everything on the traditional methodology of mathematics, we would in effect only be able to explore what we somehow already understood. But in running computer experiments we are in effect sampling the raw computational universe of possibilities, without being limited by our existing understanding.

Of course, as with physical experiments, it matters how we define and think about our experiments, and in effect what description language we use. But what certainly helps me, at least, is that I’ve now been doing computer experiments for more than forty years, and over that time I’ve been able to slowly refine the art and science of how best to do them.

In a way it’s very much like how we learn from our experience in the physical world. From seeing the results of many experiments, we gradually build up intuition, which in turn lets us start creating a conceptual framework, which then informs the design of our language for describing things. One always has to keep doing experiments, though. In a sense computational irreducibility implies that there will always be surprises, and that’s certainly what I constantly find in practice, not least in this project.

Will we be able to bring together physics, computation and human understanding to deliver what we can reasonably consider to be a final, fundamental theory of physics? It is difficult to know how hard this will be. But I am extremely optimistic that we are finally on the right track, and may even have effectively already solved the fascinating problem of language design that this entails.

Let’s Go Find the Fundamental Theory!

OK, so given all this, what’s it going to take to find the fundamental theory of physics? The most important thing—about which I’m extremely excited—is that I think we’re finally on the right track. Of course, perhaps not surprisingly, it’s still technically difficult. Part of that difficulty comes directly from computational irreducibility and from the difficulty of working out the consequences of underlying rules. But part of the difficulty also comes from the very success and sophistication of existing physics.

In the end our goal must be to build a bridge that connects our models to existing knowledge about physics. And there is difficult work to do on both sides. Trying to frame the consequences of our models in terms that align with existing physics, and trying to frame the (usually mathematical) structures of existing physics in terms that align with our models.

For me, one of the most satisfying aspects of our discoveries over the past couple of months has been the extent to which they end up resonating with a huge range of existing—sometimes so far seemingly “just mathematical”—directions that have been taken in physics in recent years. It almost seems like everyone has been right all along, and it just takes adding a new substrate to see how it all fits together. There are hints of string theory, holographic principles, causal set theory, loop quantum gravity, twistor theory, and much more. And not only that, there are also modern mathematical ideas—geometric group theory, higher-order category theory, non-commutative geometry, geometric complexity theory, etc.—that seem so well aligned that one might almost think they must have been built to inform the analysis of our models.

I have to say I didn’t expect this. The ideas and methods on which our models are based are very different from what’s ever been seriously pursued in physics, or really even in mathematics. But somehow—and I think it’s a good sign all around—what’s emerged is something that aligns wonderfully with lots of recent work in physics and mathematics. The foundations and motivating ideas are different, but the methods (and sometimes even the results) often look to be quite immediately applicable.

There’s something else I didn’t expect, but that’s very important. In studying things (like cellular automata) out in the computational universe of simple programs, I have normally found that computational irreducibility—and phenomena like undecidability—are everywhere. Try using sophisticated methods from mathematics; they will almost always fail. It is as if one hits the wall of irreducibility almost immediately, so there is almost nothing for our sophisticated methods, which ultimately rely on reducibility, to do.
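
As a small illustration of what that wall looks like in practice, the sketch below uses compressibility as a crude stand-in for reducibility (my own illustrative proxy, not a method from the project). It compares the center column of rule 30, a classic irreducible-looking case, with that of the nested rule 90, whose center column is trivially regular:

    import zlib

    def center_column(rule, steps=1200):
        # Evolve from a single black cell, recording the center cell;
        # the width is chosen so the pattern never wraps around.
        width = 2 * steps + 1
        cells = [0] * width
        cells[width // 2] = 1
        column = []
        for _ in range(steps):
            column.append(cells[width // 2])
            cells = [(rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                               + cells[(i + 1) % width])) & 1
                     for i in range(width)]
        return column

    def pack(bits):
        # Pack 0/1 values into bytes so zlib sees the raw bit content.
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)

    # A sequence that compresses well has simple regularities; rule 90's
    # center column collapses to almost nothing, rule 30's does not.
    for rule in (90, 30):
        data = pack(center_column(rule))
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"rule {rule}: compressed to {ratio:.0%} of original size")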

But perhaps because they are so minimal and so structureless, our models for fundamental physics don’t seem to work this way. Yes, there is computational irreducibility, and it’s surely important, both in principle and in practice. But the surprising thing is that there’s a remarkable depth of richness before one hits irreducibility. And indeed that’s where many of our recent discoveries come from. And it’s also where existing methods from physics and mathematics have the potential to make great contributions. What’s important is that it’s realistic that they can; there’s a lot one can understand before one hits computational irreducibility. (Which is, by the way, presumably why we are fundamentally able to form a coherent view of physical reality at all.)

So how is the effort to try to find a fundamental theory of physics going to work in practice? We plan to have a centralized effort that will push forward with the project using essentially the same R&D methods that we’ve developed at Wolfram Research over the past three decades, and that have successfully brought us so much technology—not to mention what exists of this project so far. But we plan to do everything in a completely open way. We’ve already posted the full suite of software tools that we’ve developed, along with nearly a thousand archived working notebooks going back to the 1990s, and soon more than 400 hours of videos of recent working sessions.

We want to make it as easy for people to get involved as possible, whether directly in our centralized effort, or in separate efforts of their own. We’ll be livestreaming what we do, and soliciting as much interaction as possible. We’ll be running a variety of educational programs. And we also plan to have (livestreamed) working sessions with other individuals and groups, as well as providing channels for the computational publishing of results and intermediate findings.

I have to say that for me, working on this project both now and in past years has been tremendously exciting, satisfying, and really just fun. And I’m hoping many other people will be able to share in this as the project goes forward. I think we’ve finally got a path to finding the fundamental theory of physics. Now let’s go follow that path. Let’s have a blast. And let’s try to make this the time in human history when we finally figure out how this universe of ours works!

 

UAE-based Indian expat sacked, faces jail for insulting Islam

IANS | Apr 10, 2020, 03:44 PM (IST)

Dubai: An Indian expat in the United Arab Emirates (UAE) was facing police action after being fired for insulting Islam on social media, a media report said.

Rakesh B. Kitturmath, who worked as a team leader at Emrill Services, an integrated facilities management (FM) company headquartered in Dubai, was sacked on Thursday, the company’s CEO told Gulf News.

“Kitturmath’s employment stands terminated with immediate effect. He will be handed over to Dubai Police. We have a zero-tolerance policy towards such hate crimes,” said Stuart Harrison, CEO of Emrill Services.

Kitturmath sparked outrage on social media after he ridiculed Muslim worshippers in response to a Facebook post on the coronavirus pandemic. A screenshot of his response went viral, with hundreds demanding his dismissal and arrest.

Harrison told Gulf News that the company was trying to find out whether Kitturmath was still in the UAE. “We have over 8,500 employees so this may take a while. That said, we have fired him. If he’s still in the country, he will be handed over to Dubai Police,” he said.

Originally from Ranebennur, Karnataka, Kitturmath joins a growing list of Indian expats who have landed in trouble for Islamophobic messages in recent days. Earlier this week, Abu Dhabi resident Mitesh Udeshi was sacked for posting a cartoon mocking Islam on his Facebook page, while a police complaint was filed against Sameer Bhandari in Dubai after he asked a Muslim job seeker from India to go back to Pakistan.

COVID-19 Myths and Facts

COVID-19 virus can be transmitted in areas with hot and humid climates

From the evidence so far, the COVID-19 virus can be transmitted in ALL AREAS, including areas with hot and humid weather. Regardless of climate, adopt protective measures if you live in, or travel to an area reporting COVID-19. The best way to protect yourself against COVID-19 is by frequently cleaning your hands. By doing this you eliminate viruses that may be on your hands and avoid infection that could occur by then touching your eyes, mouth, and nose.

 


Cold weather and snow CANNOT kill the new coronavirus.

There is no reason to believe that cold weather can kill the new coronavirus or other diseases. The normal human body temperature remains around 36.5°C to 37°C, regardless of the external temperature or weather. The most effective way to protect yourself against the new coronavirus is by frequently cleaning your hands with alcohol-based hand rub or washing them with soap and water.


Taking a hot bath does not prevent the new coronavirus disease

Taking a hot bath will not prevent you from catching COVID-19. Your normal body temperature remains around 36.5°C to 37°C, regardless of the temperature of your bath or shower. Actually, taking a hot bath with extremely hot water can be harmful, as it can burn you. The best way to protect yourself against COVID-19 is by frequently cleaning your hands. By doing this you eliminate viruses that may be on your hands and avoid infection that could occur by then touching your eyes, mouth, and nose.


The new coronavirus CANNOT be transmitted through mosquito bites.

To date there has been no information nor evidence to suggest that the new coronavirus could be transmitted by mosquitoes. The new coronavirus is a respiratory virus which spreads primarily through droplets generated when an infected person coughs or sneezes, or through droplets of saliva or discharge from the nose. To protect yourself, clean your hands frequently with an alcohol-based hand rub or wash them with soap and water. Also, avoid close contact with anyone who is coughing and sneezing.


Are hand dryers effective in killing the new coronavirus?

No. Hand dryers are not effective in killing the 2019-nCoV. To protect yourself against the new coronavirus, you should frequently clean your hands with an alcohol-based hand rub or wash them with soap and water. Once your hands are cleaned, you should dry them thoroughly by using paper towels or a warm air dryer.


Can an ultraviolet disinfection lamp kill the new coronavirus?

UV lamps should not be used to sterilize hands or other areas of skin as UV radiation can cause skin irritation.


How effective are thermal scanners in detecting people infected with the new coronavirus?

Thermal scanners are effective in detecting people who have developed a fever (i.e. have a higher than normal body temperature) because of infection with the new coronavirus.

However, they cannot detect people who are infected but are not yet sick with fever. This is because it takes between 2 and 10 days before people who are infected become sick and develop a fever.


Can spraying alcohol or chlorine all over your body kill the new coronavirus?

No. Spraying alcohol or chlorine all over your body will not kill viruses that have already entered your body. Spraying such substances can be harmful to clothes or mucous membranes (i.e. eyes, mouth). Be aware that both alcohol and chlorine can be useful to disinfect surfaces, but they need to be used under appropriate recommendations.


Do vaccines against pneumonia protect you against the new coronavirus?

No. Vaccines against pneumonia, such as the pneumococcal vaccine and the Haemophilus influenzae type B (Hib) vaccine, do not provide protection against the new coronavirus.

The virus is so new and different that it needs its own vaccine. Researchers are trying to develop a vaccine against 2019-nCoV, and WHO is supporting their efforts.

Although these vaccines are not effective against 2019-nCoV, vaccination against respiratory illnesses is highly recommended to protect your health.


Can regularly rinsing your nose with saline help prevent infection with the new coronavirus?

No. There is no evidence that regularly rinsing the nose with saline has protected people from infection with the new coronavirus. 

There is some limited evidence that regularly rinsing the nose with saline can help people recover more quickly from the common cold. However, regularly rinsing the nose has not been shown to prevent respiratory infections.


Can eating garlic help prevent infection with the new coronavirus?

Garlic is a healthy food that may have some antimicrobial properties. However, there is no evidence from the current outbreak that eating garlic has protected people from the new coronavirus.


Does the new coronavirus affect older people, or are younger people also susceptible?

People of all ages can be infected by the new coronavirus (2019-nCoV). Older people, and people with pre-existing medical conditions (such as asthma, diabetes, heart disease) appear to be more vulnerable to becoming severely ill with the virus. 

WHO advises people of all ages to take steps to protect themselves from the virus, for example by following good hand hygiene and good respiratory hygiene.


Are antibiotics effective in preventing and treating the new coronavirus?

No, antibiotics do not work against viruses, only bacteria.

The new coronavirus (2019-nCoV) is a virus and, therefore, antibiotics should not be used as a means of prevention or treatment.

However, if you are hospitalized for the 2019-nCoV, you may receive antibiotics because bacterial co-infection is possible.


Are there any specific medicines to prevent or treat the new coronavirus?

To date, there is no specific medicine recommended to prevent or treat the new coronavirus (2019-nCoV).

However, those infected with the virus should receive appropriate care to relieve and treat symptoms, and those with severe illness should receive optimized supportive care. Some specific treatments are under investigation and will be tested through clinical trials. WHO is helping to accelerate research and development efforts with a range of partners.


Rolling updates on coronavirus disease (COVID-19)

Updated 23 March 2020

 

Summary

 

• A pneumonia of unknown cause detected in Wuhan, China was first reported to the WHO Country Office in China on 31 December 2019.

• WHO is working 24/7 to analyse data, provide advice, coordinate with partners, help countries prepare, increase supplies and manage expert networks.

• The outbreak was declared a Public Health Emergency of International Concern on 30 January 2020.

• The international community has asked for US$675 million to help protect states with weaker health systems as part of its Strategic Preparedness and Response Plan.

• On 11 February 2020, WHO announced a name for the new coronavirus disease: COVID-19.

• To stay up to date, follow @DrTedros and @WHO on Twitter, read WHO’s daily situation reports and news releases, and watch our regular press conferences.

 

WHO and FIFA team up on campaign to kick out coronavirus

23 March 2020

WHO and FIFA launched the “Pass the message to kick out coronavirus” campaign, led by world-renowned footballers. The campaign promotes five key steps for people to follow to protect their health in line with WHO guidance, focused on hand washing, coughing etiquette, not touching your face, physical distance and staying home if feeling unwell.

“We need teamwork to combat the coronavirus,” said FIFA President Gianni Infantino. “FIFA has teamed up with WHO because health comes first. I call upon the football community worldwide to join us in supporting this campaign to pass the message even further. Some of the greatest players to have played the beautiful game have put their names to the campaign and are united in their desire to pass the message to kick out COVID-19.”

Twenty-eight players are involved in the video campaign, which is being published in 13 languages.

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “Pleased to launch the ‘Pass the message: 5 steps to kicking out #COVID19’ campaign, together with @FIFAcom Gianni Infantino & @Alissonbecker. I thank them for their active involvement in passing the message against the pandemic since the very beginning!”

 

 

Key materials:

Video: Pass the message: Five steps to kicking out coronavirus

News release: Pass the message: Five steps to kicking out coronavirus – with social tiles

WHO Director-General’s opening remarks at the media briefing on COVID-19 – 23 March 2020

Youtube recording of media briefing

Video clips for broadcasters 

WHO Health Alert for coronavirus launches on WhatsApp 

20 March 2020

To increase access to reliable information, WHO worked with WhatsApp and Facebook to launch a new WHO Health Alert messaging service today. The WhatsApp-based service will provide vital information about COVID-19 to millions of people through their mobile phones. The service uses an AI chatbot to provide updated information on the pandemic, including how to protect yourself, questions and answers, and the latest news and press coverage. The Health Alert service is now available in English and will be introduced in other languages next week. This is part of WHO’s wider initiative to work with technology companies to get accurate health information into the hands of people who need it at this critical time.

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “I am proud to announce that today we launched a new @WHO Health Alert messaging service via @WhatsApp. This service will provide the latest news & information on #COVID19, including details on symptoms and how to protect yourself. To subscribe, click here: http://bit.ly/who-covid19-whatsapp”

 

Key materials:

News release 

Link to receive messages from the WHO Health Alert on WhatsApp

 

Young people “are not invincible” 

20 March 2020 

Speaking at the COVID-19 media briefing, the Director-General said: 

“Although older people are the hardest hit, younger people are not spared.

Data from many countries clearly show that people under 50 make up a significant proportion of patients requiring hospitalization.

Today, I have a message for young people: you are not invincible. This virus could put you in hospital for weeks, or even kill you.

Even if you don’t get sick, the choices you make about where you go could be the difference between life and death for someone else.

I’m grateful that so many young people are spreading the word and not the virus.”

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “Young people are not invincible from #COVID19. The #coronavirus could put you in hospital for weeks, or even kill you. Even if you don’t get sick, the choices you make about where you go could be the difference between life and death for someone else.” https://twitter.com/WHO/status/1241049912017588225?s=20

World Health Organization (@WHO), replying: “I’m grateful that so many young people are spreading the word & not the virus. Solidarity is the key to defeating #COVID19 – solidarity between countries, but also between age groups. Thank you for heeding our call for solidarity, solidarity, solidarity” – @DrTedros #coronavirus

 

 

Key materials:

WHO Director-General’s opening remarks at the media briefing on COVID-19 – 20 March 2020

Youtube recording of media briefing

Video clips for broadcasters

 

#AskWHO on disability considerations during COVID-19 

19 March 2020 

The impact of COVID-19 is “felt by different groups in different ways”. 

In her #AskWHO public Q&A session, expert Lindsay Lee emphasises that everyone has a critical role to play in protecting people with disability during the COVID-19 pandemic. 

 

World Health Organization (@WHO) tweeted: “#AskWHO on disability considerations during #COVID19. #coronavirus” https://www.pscp.tv/w/cUIJiTI2MTAyMHwxa3ZLcFhwdmx2a0tF3Og-3wD1ha8JMU77kvr3NHxRwkCF0kvOjIbMLC-KLlE=

 

 

Key materials:

Periscope recording of the #AskWHO on disability considerations during COVID-19 

 

 

UN Secretary-General calls for solidarity, hope and political will

19 March 2020 

The coronavirus pandemic is a crisis unlike any in the UN’s 75-year history. 

During his press briefing on COVID-19, UN Secretary-General António Guterres asked world leaders to come together and offer an urgent and coordinated global response.

 

Key materials:

YouTube recording of the press briefing 

 

 

WHO Regional Office for Africa holds joint COVID-19 media briefing with World Economic Forum 

19 March 2020 

Speakers included Dr Matshidiso Moeti, WHO Regional Director for Africa and WHO Country Representatives Dr Lucile Imboua-Niava (Senegal) and Dr Owen Kaluwa (South Africa). 

Many questions remain about how the pandemic will evolve in Africa. Of particular concern is the potential vulnerability of the roughly 26 million people living with HIV, and the 58 million children with malnutrition on the continent. 

 

 

Dr Matshidiso Moeti (@MoetiTshidi) tweeted: “#COVID19 briefing moderated by @wef earlier today. This is one of the biggest health challenges Africa has faced in a generation. Adopting approaches which are adaptable to the African context is key to containing the spread.”

 

 


Launch of SOLIDARITY trial 

18 March 2020

WHO and partners are launching an international clinical trial that aims to generate robust data from around the world to find the most effective treatments for COVID-19. The SOLIDARITY trial provides simplified procedures to enable even overloaded hospitals to participate. 

 

Soumya Swaminathan (@doctorsoumya) tweeted: “Research and development is an important part of our response @WHO” https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19—18-march-2020

 

Key materials:

WHO Director-General’s opening remarks at the media briefing on COVID-19 – 18 March 2020

Periscope recording of the media briefing

Video clips for broadcasters

 

 

More than 320 000 learners enrol in online COVID-19 courses 

18 March 2020

Real-time training during global emergencies is critical for effective preparedness and response. 

The OpenWHO Massive Online Open Courses for COVID-19 provide learning resources for health professionals, decision-makers and the public. More than 320 000 learners have already enrolled.

As the pandemic continues to evolve, new resources will be added, additional language versions will continue to be rolled out, and existing courses will be updated to best reflect the changing context.

Courses include:

• Operational Planning Guidelines to Support Country Preparedness and Response

• Infection Prevention and Control

• Acute Respiratory Infections (ARIs) and basic hygiene measures to protect against infection

• Clinical Care Severe Acute Respiratory Infection

• Emerging respiratory viruses, including COVID-19: methods for detection, prevention, response and control

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “The Massive Online Open Course at http://OpenWHO.org now has over 323,000 enrollments for #COVID19 courses. This platform was developed in 2017 as part of pandemic preparedness together with the ‘managing epidemics’ handbook.” https://openwho.org/channels/covid-19

 

More information: 

Managing epidemics: key facts about major deadly diseases

 

WHO calls for urgent, aggressive actions to combat COVID-19, as cases soar in South-East Asia Region

17 March 2020

The World Health Organization today called on Member States in the South-East Asia Region to urgently scale up aggressive measures to combat COVID-19, as confirmed cases crossed 480 and the disease claimed eight lives.

Key Materials

Press release

 

New guidance on people affected by humanitarian crises 

17 March 2020

To avoid the neglect and stigmatization of individuals in groups such as asylum seekers, internally displaced people and refugees, this interim guidance outlines ‘Scaling-up COVID-19 Outbreak Readiness and Response Operations in Camps and Camp-like Settings’. 

WHO jointly developed the guidance with the International Federation of Red Cross and Red Crescent Societies (IFRC), International Organization for Migration (IOM) and the UN Refugee Agency (UNHCR). 

 

World Health Organization (@WHO) tweeted: “NEW: #COVID19 guidance on preparedness & response in humanitarian situations incl. in camp & camp-like settings https://bit.ly/2Wv68oY”

 


#TogetherAtHome online concert series starts 

16 March 2020 

Chris Martin played a mini gig at home to kick off #TogetherAtHome, a virtual no-contact concert series that aims to promote physical distancing and taking action for global health, presented by WHO and Global Citizen. More Solidarity Sessions are planned to promote health, to show support for people who are staying at home to protect themselves and others from COVID-19, and to encourage donations to the COVID-19 Solidarity Response Fund.

 

 
 
 
Instagram post from Coldplay (@coldplay): “Chris played a mini gig at home earlier today on IG Live. @glblctzn @WHO @JohnLegend #TogetherAtHome”

 

 

Key materials:

Instagram video of Coldplay #TogetherAtHome session

COVID-19 Solidarity Response Fund

 

 

“You cannot fight a fire blindfolded.” 

16 March 2020

Countries should test every suspected case of COVID-19.

If people test positive, they should be isolated and the people they have been in close contact with up to 2 days before they developed symptoms should be sought out, and those people should be tested too if they show symptoms of COVID-19. 

WHO also advises that all confirmed cases, even mild cases, should be isolated in health facilities, to prevent transmission and provide adequate care.

But we recognize that many countries have already exceeded their capacity to care for mild cases in dedicated health facilities.

In that situation, countries should prioritize older patients and those with underlying conditions.

 

World Health Organization (@WHO) tweeted:

“We have a simple message for all countries:
test
test
test.

Test every suspected case.

If they test positive, isolate them & find out who they have been in close contact with up to 2 days before they developed symptoms & test those people too” – @DrTedros

 

Key materials:

WHO Director-General’s opening remarks at the media briefing on COVID-19 – 16 March 2020

Full video of Press Conference – 16 March 2020

Video clips for broadcasters

 

 

 

WHO Mission to Iraq covers detection and response 

15 March 2020

The mission, which comprised experts from the Eastern Mediterranean Regional Office and from WHO headquarters in Geneva, held a series of meetings with national health authorities to identify the disease detection dynamics and at-risk populations, in addition to providing guidance on strengthening response and control measures.

The mission also reviewed the Ministry’s overall readiness to deal with a potential increase in case reporting, and the priority of establishing an Emergency Operations Centre to speed up action now that the disease has been declared a global pandemic.

 

WHO Iraq (@WHOIraq) tweeted: “Head of @WHOEMRO team of experts Dr Rana Hajjeh held a joint press conference with the MOH minister today. Dr Hajjeh stressed the collective responsibility for containing the spread of the virus by protecting ourselves, families, and the community. https://bit.ly/33dwKvS”

 

 

Key materials:

News release  

Video clip from MOH and mission press conference 

 

Launch of #SafeHands Challenge 

13 March 2020 

WHO launched the #SafeHands Challenge to promote the power of clean hands to fight the coronavirus.

To support the challenge, which encourages people to clean their hands with soap and water or alcohol-based hand rub, Twitter created a new #HandWashing emoji. 

Heads of State, footballers, singers and more have already taken part, with more people nominated to join the challenge every day. 

 

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “There are several measures you can take to protect yourself from #COVID19. One of the most important ones is regular & safe hand hygiene. Here are the steps recommended by @WHO 👇 Show the 🌍 where and how you wash your hands. Join the WHO #SafeHands challenge!”

 

 

Europe becomes epicenter of the pandemic

13 March 2020

Europe now has more reported cases and deaths than the rest of the world combined, apart from China.

More cases are now being reported every day than were reported in China at the height of its epidemic.

 

Tedros Adhanom Ghebreyesus (@DrTedros) tweeted: “5,000 people have lost their lives to #COVID19 – this is a tragic milestone. Europe has now become the epicenter of the pandemic, with more reported cases and deaths than the rest of the world combined, apart from China.” https://twitter.com/WHO/status/1238493078756798464

World Health Organization (@WHO): “Media briefing on #COVID19 with @DrTedros. #coronavirus” https://www.pscp.tv/w/cTpSLjI2MTAyMHwxTHl4Qk5sWk9BeXhOII8cGgvkuf7TSY7b9Ssr0WoG-dD1Wwrre_lIiIoKG5g=

 

Key materials:

WHO Director-General’s opening remarks at the media briefing on COVID-19 – 13 March 2020

Periscope recording of press briefing

Video for broadcasters 

 

Updated clinical guidance 

13 March 2020

 

World Health Organization (@WHO) tweeted:

“Updated clinical guidance covers:
🔹Early case recognition
🔹Guidance for care of children, pregnant women, adults & older people
🔹Managing cases
🔹Infection prevention & control
🔹Sample collection & an update on investigational therapeutics
👉http://bit.ly/2QhIkkB”