Both humans and computational methods struggle to discriminate the depths of objects hidden beneath foliage. However, such discrimination becomes feasible when we combine computational optical synthetic aperture sensing with the human ability to fuse stereoscopic images. For object identification tasks, as required in search and rescue, wildlife observation, surveillance, and early wildfire detection, depth assists in differentiating true from false findings: people, animals, or vehicles vs. sun-heated patches at ground level or in the tree crowns, or ground fires vs. tree trunks. We used video captured by a drone above dense woodland to test users' ability to discriminate depth. We found that this is impossible when viewing monoscopic video and relying on motion parallax. The same was true for stereoscopic video, because of the occlusions caused by foliage. However, when synthetic aperture sensing was used to reduce occlusions and disparity-scaled stereoscopic video was presented, human observers discriminated depth successfully, whereas computational (stereoscopic matching) methods remained unsuccessful. This shows the potential of systems that exploit the synergy between computational methods and human vision to perform tasks that neither can perform alone.
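
The following is a minimal sketch of the image-integration idea behind optical synthetic aperture sensing, under a strong simplification: the scene is treated as fronto-parallel, so registering each drone view to a chosen focal plane reduces to a horizontal parallax shift. The function names, the `PIXELS_PER_METER` constant, and the shift-based warp are illustrative assumptions, not the authors' implementation, which would register views with full camera poses and homographies.

```python
import numpy as np

# Hypothetical calibration constant: pixels of parallax per meter of
# camera baseline, for a scene at unit depth (illustrative only).
PIXELS_PER_METER = 50.0

def shift_image(img, dx):
    """Shift an image horizontally by an integer pixel count, zero-padding.

    Stands in for full homography-based registration to the focal plane.
    """
    out = np.zeros_like(img)
    if dx > 0:
        out[:, dx:] = img[:, :-dx]
    elif dx < 0:
        out[:, :dx] = img[:, -dx:]
    else:
        out[:] = img
    return out

def integrate_synthetic_aperture(images, baselines_m, focus_depth_m):
    """Average drone images after aligning each view to a focal plane.

    Points on the focal plane register exactly across views and stay
    sharp; occluders at other depths (e.g., foliage) land at different
    positions in each view and are smeared out by the average, which is
    what reduces occlusion in the integrated image.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, baseline in zip(images, baselines_m):
        # Parallax of the focal plane scales with baseline / depth.
        dx = int(round(PIXELS_PER_METER * baseline / focus_depth_m))
        acc += shift_image(img.astype(np.float64), -dx)
    return acc / len(images)

# Usage sketch: integrate 11 synthetic views captured 1 m apart along
# the flight path, focused on a plane 30 m below the drone.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views = [rng.random((64, 64)) for _ in range(11)]
    baselines = [float(i) for i in range(-5, 6)]
    focused = integrate_synthetic_aperture(views, baselines, focus_depth_m=30.0)
    print(focused.shape, focused.mean())
```

Sweeping `focus_depth_m` over candidate planes yields a stack of synthetically focused images; rendering two such integrals from laterally offset synthetic viewpoints, with the disparity between them rescaled, is what would produce the disparity-scaled stereoscopic video described above.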