Solving this stuff can be as complex as raytracing calculations in a 3D renderer, so chances are you are indeed running out of resources. Have you tried tracking a lower-res version? Give or take specific FOV and jitter issues, this should work and still line up with the original footage.
thanks for the tip Mylenium
I tried a half res 2 minute clip and it solved first time! Adjusting for the FOV the 3D camera looks pretty good when transferred back into a comp with the original 4K footage. I'll do some more tests on a short clip to see how the two tracks compare and reply to this thread with the results in case anyone else is interested.
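For anyone trying the same proxy workaround: transferring a camera solved on half-res footage back into the full-res comp mostly comes down to scaling the pixel-based values by the resolution ratio — the zoom (focal length expressed in pixels) and any 2D point data scale linearly, while rotations and relative 3D positions stay the same. A minimal sketch of that scaling; the function name and example values are my own, not from After Effects:

```python
# Sketch: scale a camera solve from a half-res proxy back up to the
# full-res comp. Zoom (focal length in *pixels*) and 2D track points
# scale linearly with resolution; rotation values don't change.

def scale_camera_solve(zoom_px, points_2d, scale_factor):
    """Scale pixel-based solve values by the proxy-to-full-res ratio."""
    scaled_zoom = zoom_px * scale_factor
    scaled_points = [(x * scale_factor, y * scale_factor)
                     for x, y in points_2d]
    return scaled_zoom, scaled_points

# Half-res 1920x1080 proxy of 3840x2160 footage -> factor of 2
zoom, points = scale_camera_solve(1200.0, [(960.0, 540.0)], 3840 / 1920)
print(zoom)    # 2400.0
print(points)  # [(1920.0, 1080.0)]
```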
Unfortunately, the same technique hasn't yielded the same result on the Windows machine. The Windows box (very similarly spec'd to the MacPro) just sits on 'solving camera' forever... I had a kind of related problem last week which was solved by switching from ProResHQ to Cineform, but this is all Cineform now... so frustrating... in my limited experience, Windows machines are super fast but super flaky.
The camera tracker will run out of resources as it tracks shots. All of that info has to be stored in memory before the calculations are made. I know there are times when you want a minute-and-a-half shot, but that's awfully long for most movies. Even watching a single tracked element move through the frame over a minute and a half would take a very long time. If the shot was looking out the front of a car going 60 mph and you wanted to attach a 3D element at the start of the shot, it would begin a mile and a half from the camera, so it would be pretty hard to see even if it was shot in 4K, unless it was huge.
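To put numbers on that car example (my arithmetic, just a sanity check): 60 mph is a mile a minute, so over a 90-second shot an element attached at the first frame ends up a mile and a half behind the camera:

```python
# Back-of-envelope check on the 60 mph example: how far behind the
# camera is a 3D element attached at the start of the shot by the end?
speed_mph = 60.0
shot_seconds = 90.0  # a minute-and-a-half shot
miles = speed_mph * shot_seconds / 3600.0  # 3600 seconds per hour
print(miles)  # 1.5
```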
My suggestion would be to break the shot into manageable sections, use timecode or markers to build in your overlaps, and composite the elements together. As soon as you get a good camera solve and have set your ground plane and 3D elements, I would delete the Camera Tracker effect from the layer and save the project so all that tracking data doesn't have to be stored in the comp.
It's kind of the same thing with Warp Stabilizer. Long shots tend to kill the process as resources are used up, and the only practical way to use a warp-stabilized shot that runs more than a few seconds is to render a DI and replace the footage in your project with the stabilized copy.
hey Todd, thanks for the tips link
It's curious that the camera tracker is more memory intensive than the warp stabiliser. I have yet to run into any problems with the Warp Stabiliser on these exact same clips. They stabilise fine; it's the camera tracker that is freaking out...
In case this is interesting to any future humans: I spent the day doing some very boring but productive tests. The sweet spot for this project was to transcode the original R3D footage down to about 3.5K. This lets the tracker handle the 2-minute clips but still retains close to (say 98% of) the detail of the original 4.5K transcodes when finally downscaled to 1080 (this is after warp stabilising, followed by camera tracking, followed by 2D motion-tracked repositioning... don't ask). Transcoding to 2.5K was quite a bit quicker to stabilise and track, but the final 1080 was mush compared to the higher-res sources.
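If it helps anyone weigh the same trade-off: the mush at 2.5K is likely the oversampling margin disappearing. Warp Stabiliser scales the frame in to hide its edges, so the effective source width has to stay comfortably above 1920. A rough sketch of that margin — the 5% stabiliser scale-in is my assumption, it varies per shot, so check your own settings:

```python
# Rough horizontal oversampling left over for a 1080p finish, assuming
# Warp Stabiliser scales in by ~5% (assumption -- varies per shot).
TARGET_WIDTH = 1920
STABILISER_SCALE = 1.05  # assumed auto-scale zoom from stabilisation

def oversampling(source_width):
    """Effective oversampling vs 1920 after the stabiliser's scale-in."""
    return (source_width / STABILISER_SCALE) / TARGET_WIDTH

print(round(oversampling(3500), 2))  # ~1.74x -- detail to spare
print(round(oversampling(2500), 2))  # ~1.24x -- a much thinner margin
```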
All good, except on the damned Windows machine. It's killing me. Both machines have 32 GB of RAM and very quick processors, but the PC just will not solve... it does the warp stabilisation with no drama but completely balks at the camera track... so much for my newly purchased track-crunching box... sigh.