If folks are interested, I recently published a paper [1] demonstrating that fMRI activity in the visual cortex is remarkably high-dimensional!
Specifically, using a linear approach (like PCA, but slightly fancier), we find that stimulus-related information is present along many, many dimensions of the neural response---far more than previously expected or reported.
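The paper's method is fancier, but the basic idea can be sketched with plain PCA and a split-half reliability check: a dimension counts as carrying stimulus information if its component scores replicate across independent repeats of the same stimuli. A minimal toy version (all data simulated, none of it from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: responses of 200 voxels to 500 stimuli, measured in two
# independent repeats with additive noise; 50 true latent dimensions.
n_stim, n_vox, n_dims = 500, 200, 50
latents = rng.standard_normal((n_stim, n_dims))
mixing = rng.standard_normal((n_dims, n_vox))
signal = latents @ mixing
rep1 = signal + rng.standard_normal((n_stim, n_vox))
rep2 = signal + rng.standard_normal((n_stim, n_vox))

# PCA on repeat 1 (via SVD of the centered data matrix).
rep1_c = rep1 - rep1.mean(axis=0)
_, _, vt = np.linalg.svd(rep1_c, full_matrices=False)

# Project both repeats onto repeat-1 PCs; a component is "reliable"
# (stimulus-driven) if its scores correlate across the two repeats.
s1 = rep1_c @ vt.T
s2 = (rep2 - rep2.mean(axis=0)) @ vt.T
r = np.array([np.corrcoef(s1[:, k], s2[:, k])[0, 1]
              for k in range(n_vox)])
n_reliable = int((r > 0.1).sum())
print(n_reliable)
```

With these settings the count recovers roughly the 50 planted dimensions; the point is that reliability across repeats, not raw variance explained, is what identifies stimulus-related dimensions.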
I don't immediately see how that paper's assertion (that some areas' fMRI response is influenced by baseline oxygenation and cerebral blood flow) relates to the reliability of an information-modeling experiment.
You can do a lot better than this if you redefine the problem from directly generating images with certain contrasts to maximizing information gain, even with weak magnets. Q Bio [0] had that tech working years ago: they could quickly derive many different image types from a single entropy-maximizing scan, though IIRC they never deployed it in production. They've since basically run out of money and are on life support.
I remember one of my diploma students who continued with discrete tomography for his PhD, on the topic "Binary Tomography by Iterating Linear Programs". I found it super interesting how the approach reduced the number of shots while substantially increasing accuracy at the same time.
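For anyone curious what "iterating linear programs" can look like in this setting, here is a toy sketch (my own simplification, not the thesis's algorithm): solve the LP relaxation of the binary reconstruction problem, then re-solve with an objective that pushes each pixel toward whichever binary value it is currently closer to.

```python
import numpy as np
from scipy.optimize import linprog

# Toy binary tomography: reconstruct a binary image from its row and
# column sums ("shots") by repeatedly solving LP relaxations.
truth = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0]])
rows, cols = truth.shape
n = truth.size

# Projection matrix: one equality constraint per row sum and column sum.
A = np.zeros((rows + cols, n))
for i in range(rows):
    A[i, i * cols:(i + 1) * cols] = 1
for j in range(cols):
    A[rows + j, j::cols] = 1
b = A @ truth.ravel()

# Iterate: solve the LP relaxation over [0, 1]^n, then bias the
# objective so pixels above 0.5 get negative cost (pushed toward 1)
# and pixels below 0.5 get positive cost (pushed toward 0).
c = np.zeros(n)
for _ in range(10):
    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, 1)] * n, method="highs")
    x = res.x
    c = 0.5 - x

recon = (x > 0.5).astype(int).reshape(rows, cols)
print(recon)
```

With only row and column sums the solution need not be unique, but any binary reconstruction found this way matches the measured projections exactly; the thesis presumably uses more projection angles and a more careful reweighting scheme.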
It's a nice review but the end reads like a funding pitch.
The most important mathematicians in the US, like Donoho and Tao, currently seem to be experiencing budget cuts and are starting to address the public.
[1] https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
[0] q.bio