05-09-2006, 17:19
KenWittlief
no team
Team Role: Engineer
 
Join Date: Mar 2003
Location: Rochester, NY
Posts: 4,213
Re: A way to measure force...

Quote:
Originally Posted by dlavery
In simple terms, the pixel represents a region of data in the source image, and not just an infinitely small single point at the center of the pixel region. The information assigned to a pixel is based on the aggregation of the image properties across the entire image region assigned to the pixel. Using techniques such as superresolution processing, cascading image enhancement, multiframe quiver analysis, and multiscale morphological smoothers (typically nonlinear smoothing filters for detail enhancement at large scales), information can be extracted from a digital image "from between the pixels."
I guess I should have been more specific describing the techniques 'used' on these types of TV programs. A surveillance camera will capture someone's face in one or two frames, maybe filling 25% of the screen (the whole face is roughly 200 by 200 pixels).

They freeze on that one frame of video and zoom in on the center of the person's eyeball (which might be 5 by 5 pixels),

then zoom in until that central black area fills the screen, hit the "image enhancement" button, and there is someone's face you can clearly recognize.

Not going to happen. The techniques Dave is talking about are common in big-screen TVs and projection video systems. When video is moving there is more data present: you can interpolate pixels from frame to frame, in 3x3 or 4x4 pixel blocks, to determine what lies between the pixels (as the image moves and pans), and scale those blocks up to 5x5 or 7x7 pixels. That is zooming in roughly 2x or 3x.
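A minimal sketch of the shift-and-add idea behind that kind of multi-frame upscaling (the function name and the nearest-neighbor sample placement are my own simplifications, not any particular product's algorithm):

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive multi-frame superresolution by shift-and-add.

    frames: list of (H, W) low-res views of the same scene
    shifts: per-frame (dy, dx) subpixel offsets, in low-res pixels
    scale:  integer upsampling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each low-res sample at its (rounded) spot on the fine grid.
        ys = (np.arange(h) * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w) * scale + round(dx * scale)) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Grid cells no frame ever hit stay zero; real systems interpolate them.
    return np.divide(acc, weight, out=acc, where=weight > 0)
```

With four frames whose shifts happen to cover every half-pixel offset, a 2x grid gets filled; with the frames you actually get from a panning camera, that is where the roughly 2x-3x gain tops out.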

In fact, if a camera is panning sideways you can grab successive frames and get stereoscopic 3D images from a single camera (similar to the way an owl bobs its head from side to side to increase its depth perception).
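The geometry behind that owl trick is ordinary stereo triangulation; the function name and the numbers below are just illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth z = f * B / d.

    focal_px:     camera focal length, in pixels
    baseline_m:   how far the camera moved between the two frames, in meters
    disparity_px: how far a feature shifted between the frames, in pixels
    """
    return focal_px * baseline_m / disparity_px

# A feature that shifts 4 px between frames taken 10 cm apart,
# seen through an 800 px focal length, sits about 20 m away.
print(depth_from_disparity(800, 0.10, 4))  # 20.0
```

The farther apart the two frames (the bigger the pan), the finer the depth resolution, which is exactly why the owl bobs.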

OK, so yes, you can take multiple frames of video and pull more info out, and you can interpolate and project what is probably between adjacent pixels.

But you can't zoom in on video 100x and clearly see a person's face reflected in a 5x5-pixel eyeball.

With film, though, it is possible. A 35mm frame has the equivalent resolution of about 10M pixels, compared to about 0.3M pixels for a single frame of high-quality NTSC video.
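Back-of-the-envelope numbers for those two claims (the 720x480 NTSC frame size is my assumption, a common digitization figure):

```python
ntsc = 720 * 480           # one digitized NTSC frame: ~0.35 Mpixels
film_35mm = 10_000_000     # scanned 35mm frame: ~10 Mpixels (the post's figure)

# Per-axis resolution advantage of film over video
linear_gain = (film_35mm / ntsc) ** 0.5
print(round(linear_gain, 1))  # 5.4 -- film resolves ~5x finer along each axis

# Blowing a 5x5-pixel eyeball up to a 640-pixel-wide screen is a 128x zoom
print(640 / 5)  # 128.0 -- far beyond the 2-3x that interpolation buys
```

The gap between the ~2-3x that multi-frame interpolation recovers and the ~100x the TV shows depict is the whole point of the post.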

Last edited by KenWittlief : 05-09-2006 at 17:34.