Why is running an equilibrium experiment such a bad idea?

At the conclusion of a sedimentation velocity (SV) experiment, the system reaches transport equilibrium, a state in which the opposing processes of sedimentation and diffusion are balanced. At this point there is no net solute flux, and the concentration profile becomes time-invariant. The radial concentration distribution that ultimately forms reflects a balance between the centrifugal force driving sedimentation towards the bottom of the cell and the diffusive flux arising from the concentration gradient near the bottom of the cell, which opposes the sedimentation. At equilibrium, the concentration profile of an ideal, single species follows an exponential function of radial position. The steepness of the exponential depends primarily on the molar mass and partial specific volume of the analyte, the solvent density, and the rotor speed.
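As a rough illustration, the ideal single-species equilibrium profile can be sketched in a few lines of Python. The molar mass, partial specific volume, rotor speed, and radii below are illustrative values, not taken from any particular experiment:

```python
import numpy as np

R_GAS = 8.314    # gas constant, J/(mol K)
TEMP = 293.15    # temperature, K (20 C)

def equilibrium_profile(r, molar_mass, vbar, rho, rpm, c_ref, r_ref):
    """Ideal single-species SE profile: c(r) = c_ref * exp(sigma*(r^2 - r_ref^2)/2).

    r, r_ref in meters; molar_mass in kg/mol; vbar in m^3/kg; rho in kg/m^3.
    sigma grows with molar mass, buoyancy (1 - vbar*rho), and the square of
    the angular velocity, which is what sets the steepness of the gradient.
    """
    omega = rpm * 2.0 * np.pi / 60.0                                   # rad/s
    sigma = molar_mass * (1.0 - vbar * rho) * omega**2 / (R_GAS * TEMP)
    return c_ref * np.exp(sigma * (r**2 - r_ref**2) / 2.0)

# Hypothetical 50 kDa protein in a ~3 mm column (6.9-7.2 cm) at 10,000 rpm
r = np.linspace(0.069, 0.072, 100)
c = equilibrium_profile(r, 50.0, 0.73e-3, 1000.0, 10_000, 0.3, r[0])
print(c[-1] / c[0])   # bottom-to-meniscus concentration ratio, ~3.6 here
```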

Sedimentation equilibrium (SE) experiments gained a lot of favor because a) they are easy to analyze by fitting (or linearizing) a simple exponential, and b) they provide access to KD values of self-associating systems. However, it turns out there is remarkably low information content in equilibrium experiments. For example, when measuring a monomer-dimer equilibrium, SV experiments can achieve a ~25-fold improvement over SE in the confidence intervals of the fitted KD. Furthermore, SV provides anisotropy information for the monomer and the dimer, as well as kinetic information, at least for slowly equilibrating systems that react on the time scale of the sedimentation transport. For more information, see: Demeler B, Brookes E, Wang R, Schirf V, Kim CA. Characterization of Reversible Associations by Sedimentation Velocity with UltraScan. Macromol Biosci. 2010 Jul 7;10(7):775-82. PMID: 20486142.
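For intuition about what such a fit must recover, the monomer-dimer mass-action relation is easy to write down. This is a minimal sketch with arbitrary illustrative concentrations and KD, not code from UltraScan:

```python
import math

def monomer_dimer_species(c_total, kd):
    """Solve mass action for a monomer-dimer system, all in monomer units.

    Conservation: c_total = m + 2*m^2/kd, with dimer concentration d = m^2/kd.
    Solving the quadratic for the monomer gives a closed form.
    Returns (monomer, dimer) concentrations.
    """
    m = (math.sqrt(1.0 + 8.0 * c_total / kd) - 1.0) * kd / 4.0
    return m, m * m / kd

# e.g. 10 uM total (monomer units) with KD = 5 uM
m, d = monomer_dimer_species(c_total=10.0, kd=5.0)
print(m + 2.0 * d)   # recovers the 10 uM total
```

Raising the loading concentration shifts mass into the dimer, which is exactly the signature an SE fit has to extract from a single composite gradient.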

The limited confidence of SE results stems from two factors: sparse data and mathematical ill-conditioning. In sedimentation equilibrium, each solute species theoretically generates its own exponential concentration profile, yet the instrument detects only the weight-averaged sum of all exponentials. When the sample is heterogeneous, the analysis must disentangle several overlapping exponentials whose composite still resembles a single exponential, an inherently unstable deconvolution problem. By contrast, sedimentation velocity separates species in real time: distinct boundaries migrate at different rates, making individual populations easier to resolve. In SE, however, all species ultimately converge into one radial profile, compressing the information into a single scan and amplifying uncertainty.
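The ill-conditioning is easy to demonstrate numerically: a 50/50 mixture of two exponentials whose exponents differ by a factor of two (as for a monomer and dimer of similar shape) is matched by a single exponential to within about two percent, which ordinary experimental noise can easily hide. This is a synthetic sketch, not real AUC data:

```python
import numpy as np

# Reduced radial coordinate x = (r^2 - r_ref^2)/2; the two species'
# exponents differ by a factor of two, as for a monomer and its dimer.
x = np.linspace(0.0, 1.0, 200)
mixture = 0.5 * np.exp(1.0 * x) + 0.5 * np.exp(2.0 * x)

# Best single-exponential fit is a straight line in log space
slope, intercept = np.polyfit(x, np.log(mixture), 1)
single = np.exp(intercept + slope * x)

misfit = np.max(np.abs(single - mixture) / mixture)
print(misfit)   # only ~2%: the two-species mixture masquerades as one species
```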

What’s more problematic is the stark difference in data richness between SV and SE experiments. In sedimentation velocity (SV), hundreds of scans are collected over time, each capturing different stages of boundary evolution. These scans contain unique, independent information and can be globally fit using whole boundary modeling, greatly increasing the signal-to-noise ratio and confidence in the results. In contrast, sedimentation equilibrium (SE) requires waiting—often many hours—until true equilibrium is reached. But once it is, only a single scan taken at the end of the experiment actually matters, because all subsequent scans are effectively identical. Besides a marginal improvement in signal-to-noise from duplicate scans, taking multiple scans adds no additional information, since the profile has stopped evolving. This reduces your entire experiment to essentially a single dataset, versus the hundreds available in SV, significantly limiting resolution, confidence, and information content.

It gets worse: since it typically takes a long time for sedimentation and diffusion transport to equilibrate, especially for slowly diffusing solutes, experimentalists take another shortcut. They reduce the column length to about 3 mm to reach the equilibrium point more quickly. That, of course, reduces the number of data points that can be collected even further, to about one quarter of a single velocity scan. Actually, it is less than that, because the dynamic range of the detector (if using absorbance optics) is limited to a total absorbance of about 1.2 OD or less at most wavelengths, and to a certain gradient steepness: beyond that, the solute gradient acts as a refractive lens and bends the light, so the recorded signal no longer originates from the reported radial position, falsifying the results and reducing the number of useful data points even further. So now we have about one fifth or one sixth of the data measured in a single velocity scan. To compensate for the low data-point count and low information content, experimentalists started acquiring multiple speeds and fitting them globally (something that can, of course, also be done in SV experiments). The problem is that this further lengthens the experiment, giving the protein more time to degrade or aggregate.
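The arithmetic is straightforward. The radial step size and the usable fraction below are assumptions for illustration, not instrument specifications:

```python
# Back-of-the-envelope data-point count for one scan
radial_step_cm = 0.003      # assumed radial sampling interval
sv_column_cm = 1.2          # ~12 mm solution column in a velocity cell
se_column_cm = 0.3          # ~3 mm short-column equilibrium cell

sv_points = round(sv_column_cm / radial_step_cm)    # ~400 points per SV scan
se_points = round(se_column_cm / radial_step_cm)    # ~100 points total
usable = round(se_points * 0.7)   # assume ~30% lost to OD ceiling + steep gradient
print(sv_points, se_points, usable, round(sv_points / usable, 1))
```

With these assumptions a single SE dataset carries roughly a fifth to a sixth of the points of one velocity scan, and SV collects hundreds of such scans.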

This brings up the next problem: since an SE experiment requires a balance of sedimentation and diffusion before the models are valid, one needs to wait until this gradient is established. But the gradient at the bottom of the cell can become quite large, and the local concentration may get so high that the protein aggregates and simply drops out of solution once the gradient reaches a certain steepness. At this point, conservation of mass is no longer satisfied: the overall concentration keeps decreasing while the equilibrium gradient constantly tries to re-establish itself. This process continues until nearly everything is aggregated, and true equilibrium is never reached (yet such data are still fitted in plenty of publications, producing essentially useless results). For the same reason, equilibrium analysis fails when it comes to detecting aggregates or contaminants: they will either not be seen at all or simply distort the "expected" model. Not good.

There are even more problems: since there is no time-variant information in a single scan, time-invariant noise cannot be determined. That means one must collect absorbance instead of intensity data to remove at least some of the time-invariant noise, although scratches and dirt on the cell windows remain as a time-invariant noise contribution that cannot be removed. And of course, the dreaded increase in stochastic noise by a factor of ~1.4 (the √2 that results from subtracting the reference scan) degrades the available data even further.
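The √2 factor follows from the variances of two independent noise sources adding when the reference scan is subtracted; a quick numerical check with synthetic Gaussian noise and an arbitrary assumed σ:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.005                   # assumed per-scan stochastic noise (OD)
n = 200_000

sample_noise = rng.normal(0.0, sigma, n)      # sample channel
reference_noise = rng.normal(0.0, sigma, n)   # reference channel

# Absorbance = sample - reference, so the independent variances add:
ratio = (sample_noise - reference_noise).std() / sigma
print(ratio)    # ~1.414, i.e. sqrt(2)
```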

The exponential gradient formed at the bottom of the cell is the only signal that holds any information about your sample. If you run too slowly, the gradient is so shallow that you may as well fit it with a straight line, which means there is no way to distinguish any heterogeneity in your sample. Running at a higher speed makes the gradient steeper. BUT: the steeper the gradient, the worse the refractive artifacts become, and the more your radial position detection is distorted. If you already have a very limited amount of data to analyze, any further distortion is intolerable.

Are SE experiments totally useless? No: you can still choose to run until equilibrium is reached and include the final scan in a global fit with your velocity data, although it won't make much of a difference. So why is the literature full of SE experiments? I guess the reason is that it is simple to fit an exponential. You don't even need a computer to do that. No finite element solutions of the Lamm equation are needed, and no large computers for high-resolution fitting. The price paid for running SE experiments is a lack of information and precision, together with long instrument times, which adds up to results that are useless when compared to velocity experiments.

Summary: OK, so we can all agree that SE experiments are basically useless. However, every rule has an exception, and there is one really important case where equilibrium shines:
ABDE experiments. ABDE stands for Analytical Buoyant Density Equilibrium. These experiments are extremely useful. They don't ask the same questions as traditional AUC-SE experiments, but they are worth gold. Here is a classic example where AUC equilibrium was not only used in the right way, it was also used to make a fundamental discovery in biology in the most elegant way I can think of; see a favorite paper of mine: Meselson M, Stahl FW. The Replication of DNA in Escherichia coli. Proc Natl Acad Sci U S A. 1958 Jul 15;44(7):671-682.

Today, this ABDE approach is used routinely to study viral vector cargo loading (such as AAV) with extremely high sensitivity, and therefore very low sample consumption, about 20-30 times lower than in a sedimentation velocity experiment. You can read about it in this publication from our lab: Henrickson A, Ding X, Seal AG, Qu Z, Tomlinson L, Forsey J, Gradinaru V, Oka K, Demeler B. Characterization and quantification of adeno-associated virus capsid-loading states by multi-wavelength analytical ultracentrifugation with UltraScan. Nanomedicine (Lond). 2023 Sep;18(22):1519-1534 (and many others who have published on this topic).

Design your experiments as SV experiments with this information in mind and make the most out of your expensive samples and instrument.