Different experiments provide different information about physical constants. Making it easy to submit experimental data, together with the uncertainty around each fundamental constant's estimate, would allow us to aggregate data from multiple studies, validate different experiments against each other, and better keep track of the different constants.
Goal
Build accurate estimates of fundamental constants by aggregating different experiments, using a maximum-likelihood (MLE) methodology
Modelling the likelihood
The main problem when trying to obtain the likelihood of an “implicit” variable is the unusual transformations it may go through. For instance, we might try to understand the distribution of a random variable x through an experiment whose outcome is only affected by sin(x). While informative, this measurement will not allow us to learn anything new about x that is not periodic. One way of modelling the likelihood function for a given experiment is to apply equation learning, where the input is a candidate value of x and the output is the likelihood of observing this value. The exact methodology needs more work, but it will probably apply the results of Discovering Equations using Machine Learning.
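As an illustrative sketch of this periodicity problem (the sin(x) forward model, the noise level, and all names here are assumptions, not part of any existing codebase), the likelihood over x given a noisy observation of sin(x) is inherently periodic and multimodal:

```python
import numpy as np

def periodic_likelihood(x_grid, observed, noise_sd=0.1):
    """Likelihood of candidate values x given a noisy observation of sin(x).

    Because the experiment only sees sin(x), the likelihood repeats with
    period 2*pi and cannot distinguish x from x + 2*pi.
    """
    # Gaussian measurement noise around sin(x)
    return np.exp(-0.5 * ((observed - np.sin(x_grid)) / noise_sd) ** 2)

x = np.linspace(-2 * np.pi, 2 * np.pi, 1001)
lik = periodic_likelihood(x, observed=0.5)
# The likelihood has peaks in every period, so this experiment alone
# cannot pin down a unique value of x.
```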
Rough idea
- A dataset of collected data exists for each experiment
- Models can be added to the dataset. These consist of equations that map variables to:
    - Unobserved variables specific to the experiment
    - Fundamental constants of the universe
    - Measurements based on the unobserved variables, plus noise/error terms
- Based on the models, we can use Monte Carlo simulations to get a joint density distribution over the various variables
- We can compound the distributions from the various experiments in order to get a stronger, more finely tuned measurement of the underlying variables
- It is unclear what the best way to describe the probability distributions is. A key resource here could be SciKit's models for displaying continuous marginal distribution histograms
- The key value add is to see:
    - The evolution of the uncertainty around a physical constant over time,
        - WITH references to the key experiments/papers that reduced the uncertainty.
        - We can score papers based on the information increase (the entropy ratio of the continuous variable before/after the experiment)
    - Joint distributions of the physical constant values. I need to better understand how to use the marginal distributions to feed into the joint distribution.
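A minimal sketch of the Monte Carlo compounding step above, assuming a single hypothetical constant c and two made-up experiments (the forward models c**2 and 3*c, the noise levels, and the prior range are all illustrative, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a constant c with true value 1.37, measured by two
# experiments through different forward models, each with its own noise.
true_c = 1.37
obs_a = true_c ** 2 + rng.normal(0, 0.05)   # experiment A observes c**2
obs_b = 3.0 * true_c + rng.normal(0, 0.10)  # experiment B observes 3*c

# Monte Carlo samples from a broad prior on c
c_samples = rng.uniform(0.5, 2.5, size=200_000)

def weights(pred, obs, sd):
    # Gaussian measurement-noise likelihood for each sample
    return np.exp(-0.5 * ((obs - pred) / sd) ** 2)

w_a = weights(c_samples ** 2, obs_a, 0.05)
w_b = weights(3.0 * c_samples, obs_b, 0.10)

def posterior_sd(w):
    m = np.average(c_samples, weights=w)
    return np.sqrt(np.average((c_samples - m) ** 2, weights=w))

# Compounding experiments = multiplying their likelihood weights;
# the combined posterior is tighter than either experiment alone.
sd_a, sd_b, sd_ab = posterior_sd(w_a), posterior_sd(w_b), posterior_sd(w_a * w_b)
```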
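The paper-scoring idea could be prototyped from the same weighted samples. This is a sketch using an entropy difference computed from a weighted histogram (the bullet above mentions an entropy ratio; the difference is one concrete variant, and all names and numbers here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def histogram_entropy(samples, weights=None, bins=200):
    """Differential-entropy estimate (in nats) from a weighted histogram."""
    p, edges = np.histogram(samples, bins=bins, weights=weights, density=True)
    widths = np.diff(edges)
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask]) * widths[mask])

c = rng.uniform(0.0, 2.0, size=100_000)        # broad prior samples
w = np.exp(-0.5 * ((c - 1.2) / 0.05) ** 2)     # likelihood from a new experiment

# Score: entropy of the constant's distribution before vs. after the experiment
info_gain = histogram_entropy(c) - histogram_entropy(c, weights=w)
# Positive info_gain means the experiment narrowed the distribution.
```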
User experience
- Users can:
    - check different constants
    - see the estimated values over time
    - see which experiments provide the most information on each constant
- Users can add a constant to the system (name + definition)
- Users can submit experiments, containing:
    - The formula tested in the experiment
    - The relevant constants used
    - The values of any fixed parameters, including measurement-noise terms for any measured variables. The noise can be parameterized through bias, etc.; these noise parameters will not be estimated.
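The submission fields above could be captured in a simple record type. This is a hypothetical schema sketch (all field names and the example values are illustrative, not a committed API):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentSubmission:
    """One submitted experiment: formula, constants, and fixed parameters."""
    formula: str                       # formula tested, e.g. "F = G * m1 * m2 / r**2"
    constants: list[str]               # fundamental constants involved
    fixed_parameters: dict[str, float] = field(default_factory=dict)
    noise_terms: dict[str, float] = field(default_factory=dict)  # per-measurement noise scales

# Example submission (values are made up for illustration)
sub = ExperimentSubmission(
    formula="F = G * m1 * m2 / r**2",
    constants=["G"],
    fixed_parameters={"m1": 1.0, "m2": 1.0, "r": 0.1},
    noise_terms={"F": 1e-9},
)
```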