Can someone ELI5 what this does? I read the abstract and tried to find differences in the provided examples, but I don't understand (and don't see) what the "photorealistic" part is.
Imagine history documentaries where they take an old photo and free objects from the background and move them round giving the illusion of parallax movement. This software does that in less than a second, creating a 3D model that can be accurately moved (or the camera for that matter) in your video editor. It's not new, but this one is fast and "sharp".
Until your comment I didn't realise I'd also read it wrong (despite getting the gist of it). Attempted rephrase of the original sentence:
Imagine history documentaries where they take an old photo, free objects from the background, and then move them round to give the illusion of parallax.
Takes a 2D image and lets you simulate moving the camera angle with a correct-ish parallax effect and proper subject isolation (it seems to handle multiple subjects in the same scene as well).
I guess this is what they use for the portrait mode effects.
It turns a single photo into a rough 3D scene so you can slightly move the camera and see new, realistic views. "Photorealistic" means it preserves real textures and lighting instead of a flat depth effect. Similar behavior can be seen with Apple's Spatial Scene feature in the Photos app: https://files.catbox.moe/93w7rw.mov
From a single picture it infers a hidden 3D representation, from which you can produce photorealistic images from slightly different vantage points (novel views).
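To build intuition for what "novel views from a single image" means, here's a toy depth-based reprojection (my own illustration, not the paper's method; the depth map, intrinsics, and camera shift are all placeholders):

    import numpy as np

    def reproject(image: np.ndarray, depth: np.ndarray,
                  fx: float, cx: float, cam_shift_x: float) -> np.ndarray:
        """Naively warp an (H, W, 3) image to a horizontally shifted camera
        using a per-pixel depth map (depth > 0). Purely illustrative."""
        h, w, _ = image.shape
        out = np.zeros_like(image)
        v, u = np.mgrid[0:h, 0:w]                    # pixel grid
        x = (u - cx) / fx * depth                    # back-project x to camera space
        u_new = np.round((x - cam_shift_x) * fx / depth + cx).astype(int)
        valid = (u_new >= 0) & (u_new < w)
        out[v[valid], u_new[valid]] = image[v[valid], u[valid]]
        return out                                   # holes = revealed occluded background

Nearby objects shift more than distant ones, which is the parallax; the holes in the warped output are the disocclusions that need inpainting, which is where methods like this one earn their keep.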
I just want to emphasize that this is not a NeRF where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.
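Schematically (my framing, not the paper's): a NeRF's scene lives implicitly in network weights that you can only query for color/density along rays, whereas a splat-style output is an explicit list of primitives you can open, inspect, and edit, roughly like:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Gaussian:
        mean: np.ndarray       # (3,) world-space position
        scale: np.ndarray      # (3,) per-axis extent
        rotation: np.ndarray   # (4,) quaternion
        opacity: float
        sh_coeffs: np.ndarray  # spherical-harmonic color coefficients

    # A splat scene is "just" a long list of these; the exact attributes
    # SHARP exports may differ from this generic sketch.
    scene: list[Gaussian] = []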
Black Mirror episode portraying what this could do: https://youtu.be/XJIq_Dy--VA?t=14. If Apple ran SHARP on this photo and compared it to the show, that would be incredible.
Agreed, this is a terrible presentation. The paper abstract is bordering on word salad, the demo images are meaningless and don’t show any clear difference to the previous SotA, the introduction talks about “nearby” views while the images appear to show zooming in, etc.
Apple's Spatial Scene in the Photos app shows similar behavior, turning a single photo into a small 3D scene that you can view by tilting the phone. Demo here: https://files.catbox.moe/93w7rw.mov
In Section D.7 they describe: "The complex reflection in water is interpreted by the network as a distant mountain, therefore the water surface is broken."
This is really interesting to me because the model would have to encode the reflection as both the depth of the reflecting surface (for texture, scattering etc) as well as the "real depth" of the reflected object. The examples in Figure 11 and 12 already look amazing.
This is incredibly cool. It's interesting how it fails in the regions where you need to inpaint. SVC seems to do that better than all the rest, though it's nowhere near the photorealism of this model.
Is there a similar flow to transform a video/photo/NeRF of a scene into a tighter, minimal-polygon approximation of it? The reason I ask is that it would make some things really cool. To make my baby monitor mount I had to break out the calipers and measure the pins and this and that, but if I could take a couple of photos and iterate in software, that would be sick.
You'd still need at least one real measurement: this might get the proportions right if the background can be clearly separated, but the absolute size of an object can be worlds apart.
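Once you have a reconstruction, though, fixing the scale from that one measurement is trivial; a minimal sketch, assuming you can pick two reconstructed points that correspond to something you measured for real:

    import numpy as np

    def rescale(points: np.ndarray, p_a: np.ndarray, p_b: np.ndarray,
                real_length_m: float) -> np.ndarray:
        """Scale a reconstructed point cloud so that the distance between two
        picked points matches a physically measured length (in meters)."""
        model_length = np.linalg.norm(p_a - p_b)
        return points * (real_length_m / model_length)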
Have a look through the rest of the images. TMPI has some pretty obvious shortcomings in a lot of them.
1. Sky looks janky.
2. Blurry/warped area behind the horse.
3. The head seems to move a lot more than the body. You could argue that this one is desirable.
4. A bit of warping and ghosting around the edges of the flowers, particularly noticeable towards the top of the image.
5. Very minor, but the flowers move as if they aren't attached to the wall.
I'm confused: does it actually generate environments from photographs? I can't view the galleries since I didn't sign up for emails, but all of the gallery thumbnails are AI, not photos.
Works great. The model file is 2.8 GB, and on an M2 rendering took a few seconds. The result is a Gaussian .ply file, but the repo implementation requires a CUDA card to render video, so I used one of the WebGL live renderers linked from here: https://github.com/scier/MetalSplatter?tab=readme-ov-file#re...
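If anyone wants to poke at the .ply before setting up a renderer, a quick sanity check with the plyfile package works; note that the attribute names I'm checking for (x/y/z etc.) follow the common 3DGS export convention, and I haven't verified SHARP uses exactly those:

    from plyfile import PlyData   # pip install plyfile
    import numpy as np

    ply = PlyData.read("output.ply")          # path to the generated splat file
    verts = ply["vertex"]
    print(verts.count, "gaussians")
    print("attributes:", [p.name for p in verts.properties])

    # Rough scene extent (x/y/z are standard; other fields vary by exporter).
    xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=-1)
    print("bounds min:", xyz.min(axis=0), "max:", xyz.max(axis=0))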
That is really impressive. However, it was a bit confusing at first because in the koala example at the top, the zoomed-in area is only slightly bigger than the source area. I wonder why they didn't make it 2-3x as big on both axes like they did with the others.
I understand AI for reasoning, knowledge, etc., but I haven't figured out why anyone wants to spend money on this visual and video stuff. It just seems like a bad idea.
Simulation. It takes a lot of effort today to bring up simulations in various fields. 3D programming is very nontrivial and asset development is extremely expensive. If I can take a photo of a workspace and use it to generate a 3D scene, I can then use that scene in simulations to test ideas out. This is already particularly useful in robotics and industrial automation.
I don't see any examples of 3D scene information usable for simulation. If you want to simulate something hitting a table, you need the whole table (surface) in space, not just some spatial-illusion effect extrapolated from an image of a table. I also think modelling the 3D objects is the least expensive part of a simulation... the simulation itself is the expensive thing.
I doubt this will be useful for robotics or industrial automation, where you need an actual spatial or functional understanding of the object/environment.
This specific paper is pretty different from the kind of photo/video generation that has been hyped up in recent years. In this case, I think this might be what they're using for the iOS spatial wallpaper feature, which is arguably useless but is definitely an aesthetic differentiator from Android devices. So, it's indirectly making money.
Do people not spend on entertainment? Commercials? It's probably less of a bad idea than the knowledge side: an AI producing a bad visual has fewer downsides than producing wrong knowledge that leads to a wrong decision.
https://m.youtube.com/watch?v=DgPaCWJL7XI&t=1s&pp=2AEBkAIB0g...
https://www.youtube.com/watch?v=X0oSKFUnEXc
Gaussian splatting is pretty awesome.
Imagine history documentaries where they take an old photo, free objects from the background, and then move them round to give the illusion of parallax.
Even with the commas, if you keep the ambiguous “free”, I’d suggest prefixing “objects” with “the” or “any”.
I guess this is what they use for the portrait mode effects.
(I am oversimplifying).
I just want to emphasize that this is not a NeRF where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.
Or if you prefer Blade Runner: https://youtu.be/qHepKd38pr0?t=107
https://github.com/apple/ml-sharp#rendering-trajectories-cud...
CUDA is needed to render the side-scrolling video, but there are many ways to do other things with the result.
Photoshop's content-aware fill could do this equally well or better many years ago.
This is really interesting to me because the model would have to encode the reflection as both the depth of the reflecting surface (for texture, scattering etc) as well as the "real depth" of the reflected object. The examples in Figure 11 and 12 already look amazing.
Long tail problems indeed.
Without that, it's hard to tell how cherry-picked the NVS video samples are.
EDIT: I did it myself, if anyone wants to check out the result (caveat, n=1): https://github.com/avaer/ml-sharp-example
[0]https://www.spaitial.ai/
It’s a website that collects people’s email addresses.
Why no landscape or underwater scenes or something in space, etc.?
I believe this company is doing image (or text) -> off-the-shelf image model to generate more views -> some variant of Gaussian splatting.
So they aren't really "generating" the world as one might imagine.
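If that guess is right, the pipeline has roughly this shape (every function below is a hypothetical placeholder for an off-the-shelf component, not their actual code):

    from typing import Any, List

    def generate_novel_views(prompt_or_image: Any, n_views: int) -> List[Any]:
        raise NotImplementedError("hypothetical: a diffusion-style multi-view image model")

    def estimate_poses(views: List[Any]) -> List[Any]:
        raise NotImplementedError("hypothetical: SfM (e.g. COLMAP) or a learned pose estimator")

    def fit_gaussian_splats(views: List[Any], poses: List[Any]) -> Any:
        raise NotImplementedError("hypothetical: a 3DGS-style optimization over the posed views")

    def build_world(prompt_or_image: Any) -> Any:
        views = generate_novel_views(prompt_or_image, n_views=16)
        poses = estimate_poses(views)
        return fit_gaussian_splats(views, poses)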